00:00:00.001 Started by upstream project "autotest-nightly" build number 3912 00:00:00.001 originally caused by: 00:00:00.002 Started by user Latecki, Karol 00:00:00.003 Started by upstream project "autotest-nightly" build number 3911 00:00:00.003 originally caused by: 00:00:00.003 Started by user Latecki, Karol 00:00:00.005 Started by upstream project "autotest-nightly" build number 3909 00:00:00.005 originally caused by: 00:00:00.005 Started by user Latecki, Karol 00:00:00.006 Started by upstream project "autotest-nightly" build number 3908 00:00:00.006 originally caused by: 00:00:00.007 Started by user Latecki, Karol 00:00:00.076 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.077 The recommended git tool is: git 00:00:00.077 using credential 00000000-0000-0000-0000-000000000002 00:00:00.079 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.110 Fetching changes from the remote Git repository 00:00:00.112 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.152 Using shallow fetch with depth 1 00:00:00.152 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.152 > git --version # timeout=10 00:00:00.204 > git --version # 'git version 2.39.2' 00:00:00.204 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.233 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.233 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/29/24129/6 # timeout=5 00:00:04.218 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.231 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.243 Checking out Revision e33ef006ccd688d2b66122cd0240b989d53c9017 (FETCH_HEAD) 00:00:04.243 > git config core.sparsecheckout # timeout=10 00:00:04.254 > git read-tree -mu HEAD # timeout=10 00:00:04.270 > git checkout -f e33ef006ccd688d2b66122cd0240b989d53c9017 # timeout=5 00:00:04.290 Commit message: "jenkins/jjb: remove nvme tests from distro specific jobs." 00:00:04.290 > git rev-list --no-walk 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=10 00:00:04.399 [Pipeline] Start of Pipeline 00:00:04.413 [Pipeline] library 00:00:04.415 Loading library shm_lib@master 00:00:04.415 Library shm_lib@master is cached. Copying from home. 00:00:04.430 [Pipeline] node 00:00:04.438 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:04.439 [Pipeline] { 00:00:04.448 [Pipeline] catchError 00:00:04.449 [Pipeline] { 00:00:04.458 [Pipeline] wrap 00:00:04.466 [Pipeline] { 00:00:04.474 [Pipeline] stage 00:00:04.475 [Pipeline] { (Prologue) 00:00:04.489 [Pipeline] echo 00:00:04.490 Node: VM-host-SM0 00:00:04.494 [Pipeline] cleanWs 00:00:04.504 [WS-CLEANUP] Deleting project workspace... 00:00:04.504 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.511 [WS-CLEANUP] done 00:00:04.740 [Pipeline] setCustomBuildProperty 00:00:04.841 [Pipeline] httpRequest 00:00:04.864 [Pipeline] echo 00:00:04.866 Sorcerer 10.211.164.101 is alive 00:00:04.874 [Pipeline] httpRequest 00:00:04.879 HttpMethod: GET 00:00:04.879 URL: http://10.211.164.101/packages/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:00:04.880 Sending request to url: http://10.211.164.101/packages/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:00:04.882 Response Code: HTTP/1.1 200 OK 00:00:04.882 Success: Status code 200 is in the accepted range: 200,404 00:00:04.883 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:00:05.532 [Pipeline] sh 00:00:05.815 + tar --no-same-owner -xf jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:00:05.829 [Pipeline] httpRequest 00:00:05.852 [Pipeline] echo 00:00:05.854 Sorcerer 10.211.164.101 is alive 00:00:05.861 [Pipeline] httpRequest 00:00:05.865 HttpMethod: GET 00:00:05.866 URL: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:00:05.867 Sending request to url: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:00:05.876 Response Code: HTTP/1.1 200 OK 00:00:05.876 Success: Status code 200 is in the accepted range: 200,404 00:00:05.877 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:01:31.897 [Pipeline] sh 00:01:32.173 + tar --no-same-owner -xf spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:01:35.591 [Pipeline] sh 00:01:35.873 + git -C spdk log --oneline -n5 00:01:35.873 f7b31b2b9 log: declare g_deprecation_epoch static 00:01:35.873 21d0c3ad6 trace: declare g_user_thread_index_start, g_ut_array and g_ut_array_mutex static 00:01:35.873 3731556bd lvol: declare g_lvol_if static 00:01:35.873 f8404a2d4 nvme: declare g_current_transport_index and g_spdk_transports static 00:01:35.873 34efb6523 dma: declare g_dma_mutex and g_dma_memory_domains static 00:01:35.895 [Pipeline] writeFile 00:01:35.927 [Pipeline] sh 00:01:36.209 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:36.220 [Pipeline] sh 00:01:36.499 + cat autorun-spdk.conf 00:01:36.499 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:36.499 SPDK_TEST_NVMF=1 00:01:36.499 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:36.499 SPDK_TEST_VFIOUSER=1 00:01:36.499 SPDK_TEST_USDT=1 00:01:36.499 SPDK_RUN_ASAN=1 00:01:36.499 SPDK_RUN_UBSAN=1 00:01:36.499 SPDK_TEST_NVMF_MDNS=1 00:01:36.499 NET_TYPE=virt 00:01:36.499 SPDK_JSONRPC_GO_CLIENT=1 00:01:36.499 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:36.505 RUN_NIGHTLY=1 00:01:36.507 [Pipeline] } 00:01:36.525 [Pipeline] // stage 00:01:36.541 [Pipeline] stage 00:01:36.544 [Pipeline] { (Run VM) 00:01:36.560 [Pipeline] sh 00:01:36.841 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:36.841 + echo 'Start stage prepare_nvme.sh' 00:01:36.841 Start stage prepare_nvme.sh 00:01:36.841 + [[ -n 2 ]] 00:01:36.841 + disk_prefix=ex2 00:01:36.841 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:36.841 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:36.841 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:36.841 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:36.841 ++ SPDK_TEST_NVMF=1 00:01:36.841 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:36.841 ++ SPDK_TEST_VFIOUSER=1 00:01:36.841 ++ SPDK_TEST_USDT=1 00:01:36.841 ++ SPDK_RUN_ASAN=1 00:01:36.841 ++ SPDK_RUN_UBSAN=1 
00:01:36.841 ++ SPDK_TEST_NVMF_MDNS=1 00:01:36.841 ++ NET_TYPE=virt 00:01:36.841 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:36.841 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:36.841 ++ RUN_NIGHTLY=1 00:01:36.841 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:36.841 + nvme_files=() 00:01:36.841 + declare -A nvme_files 00:01:36.841 + backend_dir=/var/lib/libvirt/images/backends 00:01:36.841 + nvme_files['nvme.img']=5G 00:01:36.841 + nvme_files['nvme-cmb.img']=5G 00:01:36.841 + nvme_files['nvme-multi0.img']=4G 00:01:36.841 + nvme_files['nvme-multi1.img']=4G 00:01:36.841 + nvme_files['nvme-multi2.img']=4G 00:01:36.841 + nvme_files['nvme-openstack.img']=8G 00:01:36.841 + nvme_files['nvme-zns.img']=5G 00:01:36.841 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:36.841 + (( SPDK_TEST_FTL == 1 )) 00:01:36.841 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:36.841 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:36.841 + for nvme in "${!nvme_files[@]}" 00:01:36.841 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:01:36.841 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:36.841 + for nvme in "${!nvme_files[@]}" 00:01:36.841 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:01:36.841 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:36.841 + for nvme in "${!nvme_files[@]}" 00:01:36.841 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:01:36.841 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:36.841 + for nvme in "${!nvme_files[@]}" 00:01:36.841 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:01:36.841 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:36.841 + for nvme in "${!nvme_files[@]}" 00:01:36.841 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:01:36.841 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:36.841 + for nvme in "${!nvme_files[@]}" 00:01:36.841 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:01:36.841 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:36.841 + for nvme in "${!nvme_files[@]}" 00:01:36.841 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:01:37.099 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:37.099 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:01:37.099 + echo 'End stage prepare_nvme.sh' 00:01:37.099 End stage prepare_nvme.sh 00:01:37.111 [Pipeline] sh 00:01:37.392 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:37.392 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b 
/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora38 00:01:37.392 00:01:37.392 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:37.392 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:37.392 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:37.392 HELP=0 00:01:37.392 DRY_RUN=0 00:01:37.392 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:01:37.392 NVME_DISKS_TYPE=nvme,nvme, 00:01:37.392 NVME_AUTO_CREATE=0 00:01:37.392 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:01:37.392 NVME_CMB=,, 00:01:37.392 NVME_PMR=,, 00:01:37.392 NVME_ZNS=,, 00:01:37.392 NVME_MS=,, 00:01:37.392 NVME_FDP=,, 00:01:37.392 SPDK_VAGRANT_DISTRO=fedora38 00:01:37.392 SPDK_VAGRANT_VMCPU=10 00:01:37.392 SPDK_VAGRANT_VMRAM=12288 00:01:37.392 SPDK_VAGRANT_PROVIDER=libvirt 00:01:37.392 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:37.392 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:37.392 SPDK_OPENSTACK_NETWORK=0 00:01:37.392 VAGRANT_PACKAGE_BOX=0 00:01:37.392 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:37.392 FORCE_DISTRO=true 00:01:37.393 VAGRANT_BOX_VERSION= 00:01:37.393 EXTRA_VAGRANTFILES= 00:01:37.393 NIC_MODEL=e1000 00:01:37.393 00:01:37.393 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:01:37.393 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:40.676 Bringing machine 'default' up with 'libvirt' provider... 00:01:41.610 ==> default: Creating image (snapshot of base box volume). 00:01:41.610 ==> default: Creating domain with the following settings... 
00:01:41.610 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721671733_d70f6b0b923fc10b7377 00:01:41.610 ==> default: -- Domain type: kvm 00:01:41.610 ==> default: -- Cpus: 10 00:01:41.610 ==> default: -- Feature: acpi 00:01:41.610 ==> default: -- Feature: apic 00:01:41.610 ==> default: -- Feature: pae 00:01:41.610 ==> default: -- Memory: 12288M 00:01:41.610 ==> default: -- Memory Backing: hugepages: 00:01:41.610 ==> default: -- Management MAC: 00:01:41.610 ==> default: -- Loader: 00:01:41.610 ==> default: -- Nvram: 00:01:41.610 ==> default: -- Base box: spdk/fedora38 00:01:41.610 ==> default: -- Storage pool: default 00:01:41.610 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721671733_d70f6b0b923fc10b7377.img (20G) 00:01:41.610 ==> default: -- Volume Cache: default 00:01:41.610 ==> default: -- Kernel: 00:01:41.610 ==> default: -- Initrd: 00:01:41.610 ==> default: -- Graphics Type: vnc 00:01:41.610 ==> default: -- Graphics Port: -1 00:01:41.610 ==> default: -- Graphics IP: 127.0.0.1 00:01:41.610 ==> default: -- Graphics Password: Not defined 00:01:41.610 ==> default: -- Video Type: cirrus 00:01:41.610 ==> default: -- Video VRAM: 9216 00:01:41.610 ==> default: -- Sound Type: 00:01:41.610 ==> default: -- Keymap: en-us 00:01:41.610 ==> default: -- TPM Path: 00:01:41.610 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:41.610 ==> default: -- Command line args: 00:01:41.610 ==> default: -> value=-device, 00:01:41.610 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:41.610 ==> default: -> value=-drive, 00:01:41.610 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:01:41.610 ==> default: -> value=-device, 00:01:41.610 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:41.610 ==> default: -> value=-device, 00:01:41.610 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:41.610 ==> default: -> value=-drive, 00:01:41.610 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:41.610 ==> default: -> value=-device, 00:01:41.610 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:41.610 ==> default: -> value=-drive, 00:01:41.610 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:41.610 ==> default: -> value=-device, 00:01:41.610 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:41.610 ==> default: -> value=-drive, 00:01:41.610 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:41.610 ==> default: -> value=-device, 00:01:41.610 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:41.962 ==> default: Creating shared folders metadata... 00:01:41.962 ==> default: Starting domain. 00:01:44.488 ==> default: Waiting for domain to get an IP address... 00:02:02.562 ==> default: Waiting for SSH to become available... 00:02:02.562 ==> default: Configuring and enabling network interfaces... 
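A minimal sketch of what the backing files and the NVMe wiring above amount to, assuming create_nvme_img.sh is a thin wrapper around qemu-img (the "Formatting ... fmt=raw ... preallocation=falloc" lines earlier in this log are qemu-img output) and reusing the -drive/-device arguments printed by vagrant-libvirt verbatim; the path, size, serial and addr values are taken from this log, not from the scripts themselves:

#!/usr/bin/env bash
# Create one raw backing file for an emulated NVMe drive (5G, as used above).
qemu-img create -f raw -o preallocation=falloc \
    /var/lib/libvirt/images/backends/ex2-nvme.img 5G

# Expose it to the guest as an NVMe controller with a single namespace,
# mirroring one "-drive"/"-device nvme"/"-device nvme-ns" triple from the
# command line args listed above.
qemu-system-x86_64 -m 1024 -display none \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme,id=nvme-0,serial=12340,addr=0x10 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

The second controller above (serial=12341) repeats the same pattern with three nvme-ns devices (nsid=1..3) backed by the ex2-nvme-multi0/1/2.img files, which is why the guest later reports nvme1n1, nvme1n2 and nvme1n3.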
00:02:05.843 default: SSH address: 192.168.121.195:22 00:02:05.843 default: SSH username: vagrant 00:02:05.843 default: SSH auth method: private key 00:02:08.373 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:16.484 ==> default: Mounting SSHFS shared folder... 00:02:17.860 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:17.860 ==> default: Checking Mount.. 00:02:18.793 ==> default: Folder Successfully Mounted! 00:02:18.793 ==> default: Running provisioner: file... 00:02:19.726 default: ~/.gitconfig => .gitconfig 00:02:20.292 00:02:20.292 SUCCESS! 00:02:20.292 00:02:20.292 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:02:20.292 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:20.292 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:02:20.292 00:02:20.300 [Pipeline] } 00:02:20.318 [Pipeline] // stage 00:02:20.326 [Pipeline] dir 00:02:20.327 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:02:20.329 [Pipeline] { 00:02:20.340 [Pipeline] catchError 00:02:20.341 [Pipeline] { 00:02:20.355 [Pipeline] sh 00:02:20.633 + vagrant ssh-config --host vagrant 00:02:20.633 + sed -ne /^Host/,$p 00:02:20.633 + tee ssh_conf 00:02:24.860 Host vagrant 00:02:24.860 HostName 192.168.121.195 00:02:24.860 User vagrant 00:02:24.860 Port 22 00:02:24.860 UserKnownHostsFile /dev/null 00:02:24.860 StrictHostKeyChecking no 00:02:24.860 PasswordAuthentication no 00:02:24.860 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:24.860 IdentitiesOnly yes 00:02:24.860 LogLevel FATAL 00:02:24.860 ForwardAgent yes 00:02:24.860 ForwardX11 yes 00:02:24.860 00:02:24.881 [Pipeline] withEnv 00:02:24.884 [Pipeline] { 00:02:24.906 [Pipeline] sh 00:02:25.190 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:25.190 source /etc/os-release 00:02:25.190 [[ -e /image.version ]] && img=$(< /image.version) 00:02:25.190 # Minimal, systemd-like check. 00:02:25.190 if [[ -e /.dockerenv ]]; then 00:02:25.190 # Clear garbage from the node's name: 00:02:25.190 # agt-er_autotest_547-896 -> autotest_547-896 00:02:25.190 # $HOSTNAME is the actual container id 00:02:25.190 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:25.190 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:25.190 # We can assume this is a mount from a host where container is running, 00:02:25.190 # so fetch its hostname to easily identify the target swarm worker. 
00:02:25.190 container="$(< /etc/hostname) ($agent)" 00:02:25.190 else 00:02:25.190 # Fallback 00:02:25.190 container=$agent 00:02:25.190 fi 00:02:25.190 fi 00:02:25.190 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:25.190 00:02:25.200 [Pipeline] } 00:02:25.225 [Pipeline] // withEnv 00:02:25.234 [Pipeline] setCustomBuildProperty 00:02:25.254 [Pipeline] stage 00:02:25.256 [Pipeline] { (Tests) 00:02:25.275 [Pipeline] sh 00:02:25.548 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:25.819 [Pipeline] sh 00:02:26.095 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:26.435 [Pipeline] timeout 00:02:26.436 Timeout set to expire in 40 min 00:02:26.438 [Pipeline] { 00:02:26.454 [Pipeline] sh 00:02:26.732 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:27.298 HEAD is now at f7b31b2b9 log: declare g_deprecation_epoch static 00:02:27.312 [Pipeline] sh 00:02:27.589 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:27.860 [Pipeline] sh 00:02:28.139 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:28.155 [Pipeline] sh 00:02:28.434 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:28.692 ++ readlink -f spdk_repo 00:02:28.692 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:28.692 + [[ -n /home/vagrant/spdk_repo ]] 00:02:28.692 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:28.692 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:28.692 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:28.692 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:28.692 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:28.692 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:28.692 + cd /home/vagrant/spdk_repo 00:02:28.692 + source /etc/os-release 00:02:28.692 ++ NAME='Fedora Linux' 00:02:28.692 ++ VERSION='38 (Cloud Edition)' 00:02:28.692 ++ ID=fedora 00:02:28.692 ++ VERSION_ID=38 00:02:28.692 ++ VERSION_CODENAME= 00:02:28.692 ++ PLATFORM_ID=platform:f38 00:02:28.692 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:28.692 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:28.692 ++ LOGO=fedora-logo-icon 00:02:28.692 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:28.692 ++ HOME_URL=https://fedoraproject.org/ 00:02:28.692 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:28.692 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:28.692 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:28.692 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:28.692 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:28.692 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:28.692 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:28.692 ++ SUPPORT_END=2024-05-14 00:02:28.692 ++ VARIANT='Cloud Edition' 00:02:28.692 ++ VARIANT_ID=cloud 00:02:28.692 + uname -a 00:02:28.692 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:28.692 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:28.950 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:28.950 Hugepages 00:02:28.950 node hugesize free / total 00:02:29.208 node0 1048576kB 0 / 0 00:02:29.208 node0 2048kB 0 / 0 00:02:29.208 00:02:29.208 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:29.208 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:29.208 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:29.208 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:29.208 + rm -f /tmp/spdk-ld-path 00:02:29.208 + source autorun-spdk.conf 00:02:29.208 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:29.208 ++ SPDK_TEST_NVMF=1 00:02:29.208 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:29.208 ++ SPDK_TEST_VFIOUSER=1 00:02:29.208 ++ SPDK_TEST_USDT=1 00:02:29.208 ++ SPDK_RUN_ASAN=1 00:02:29.208 ++ SPDK_RUN_UBSAN=1 00:02:29.208 ++ SPDK_TEST_NVMF_MDNS=1 00:02:29.208 ++ NET_TYPE=virt 00:02:29.208 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:29.208 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:29.208 ++ RUN_NIGHTLY=1 00:02:29.208 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:29.208 + [[ -n '' ]] 00:02:29.208 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:29.209 + for M in /var/spdk/build-*-manifest.txt 00:02:29.209 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:29.209 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:29.209 + for M in /var/spdk/build-*-manifest.txt 00:02:29.209 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:29.209 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:29.209 ++ uname 00:02:29.209 + [[ Linux == \L\i\n\u\x ]] 00:02:29.209 + sudo dmesg -T 00:02:29.209 + sudo dmesg --clear 00:02:29.209 + dmesg_pid=5166 00:02:29.209 + sudo dmesg -Tw 00:02:29.209 + [[ Fedora Linux == FreeBSD ]] 00:02:29.209 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:29.209 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:29.209 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:29.209 + [[ -x /usr/src/fio-static/fio ]] 00:02:29.209 + export FIO_BIN=/usr/src/fio-static/fio 00:02:29.209 + FIO_BIN=/usr/src/fio-static/fio 00:02:29.209 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:29.209 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:29.209 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:29.209 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:29.209 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:29.209 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:29.209 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:29.209 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:29.209 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:29.209 Test configuration: 00:02:29.209 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:29.209 SPDK_TEST_NVMF=1 00:02:29.209 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:29.209 SPDK_TEST_VFIOUSER=1 00:02:29.209 SPDK_TEST_USDT=1 00:02:29.209 SPDK_RUN_ASAN=1 00:02:29.209 SPDK_RUN_UBSAN=1 00:02:29.209 SPDK_TEST_NVMF_MDNS=1 00:02:29.209 NET_TYPE=virt 00:02:29.209 SPDK_JSONRPC_GO_CLIENT=1 00:02:29.209 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:29.471 RUN_NIGHTLY=1 18:09:41 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:29.471 18:09:41 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:29.471 18:09:41 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:29.471 18:09:41 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:29.471 18:09:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.471 18:09:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.471 18:09:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.471 18:09:41 -- paths/export.sh@5 -- $ export PATH 00:02:29.471 18:09:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.471 18:09:41 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:29.471 18:09:41 -- common/autobuild_common.sh@447 -- $ 
date +%s 00:02:29.471 18:09:41 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721671781.XXXXXX 00:02:29.471 18:09:41 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721671781.klQ3gk 00:02:29.471 18:09:41 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:02:29.471 18:09:41 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:02:29.471 18:09:41 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:29.471 18:09:41 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:29.471 18:09:41 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:29.471 18:09:41 -- common/autobuild_common.sh@463 -- $ get_config_params 00:02:29.471 18:09:41 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:29.471 18:09:41 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.471 18:09:41 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:02:29.471 18:09:41 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:02:29.471 18:09:41 -- pm/common@17 -- $ local monitor 00:02:29.471 18:09:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.471 18:09:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.471 18:09:41 -- pm/common@25 -- $ sleep 1 00:02:29.471 18:09:41 -- pm/common@21 -- $ date +%s 00:02:29.471 18:09:41 -- pm/common@21 -- $ date +%s 00:02:29.471 18:09:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721671781 00:02:29.471 18:09:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721671781 00:02:29.471 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721671781_collect-vmstat.pm.log 00:02:29.471 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721671781_collect-cpu-load.pm.log 00:02:30.405 18:09:42 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:02:30.405 18:09:42 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:30.405 18:09:42 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:30.405 18:09:42 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:30.405 18:09:42 -- spdk/autobuild.sh@16 -- $ date -u 00:02:30.405 Mon Jul 22 06:09:42 PM UTC 2024 00:02:30.405 18:09:42 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:30.405 v24.09-pre-297-gf7b31b2b9 00:02:30.405 18:09:42 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:30.405 18:09:42 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:30.405 18:09:42 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:30.405 18:09:42 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:30.405 18:09:42 -- common/autotest_common.sh@10 -- $ set +x 00:02:30.405 ************************************ 00:02:30.405 START TEST asan 00:02:30.405 ************************************ 
00:02:30.405 using asan 00:02:30.405 18:09:42 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:02:30.405 00:02:30.405 real 0m0.000s 00:02:30.405 user 0m0.000s 00:02:30.405 sys 0m0.000s 00:02:30.405 18:09:42 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:30.405 18:09:42 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:30.405 ************************************ 00:02:30.405 END TEST asan 00:02:30.405 ************************************ 00:02:30.405 18:09:42 -- common/autotest_common.sh@1142 -- $ return 0 00:02:30.405 18:09:42 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:30.405 18:09:42 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:30.405 18:09:42 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:30.405 18:09:42 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:30.405 18:09:42 -- common/autotest_common.sh@10 -- $ set +x 00:02:30.405 ************************************ 00:02:30.405 START TEST ubsan 00:02:30.405 ************************************ 00:02:30.405 using ubsan 00:02:30.405 18:09:42 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:30.405 00:02:30.405 real 0m0.000s 00:02:30.405 user 0m0.000s 00:02:30.405 sys 0m0.000s 00:02:30.405 18:09:42 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:30.405 ************************************ 00:02:30.405 END TEST ubsan 00:02:30.405 ************************************ 00:02:30.405 18:09:42 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:30.663 18:09:42 -- common/autotest_common.sh@1142 -- $ return 0 00:02:30.663 18:09:42 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:30.663 18:09:42 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:30.663 18:09:42 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:30.663 18:09:42 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:30.663 18:09:42 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:30.663 18:09:42 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:30.663 18:09:42 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:30.663 18:09:42 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:30.663 18:09:42 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang --with-shared 00:02:30.663 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:30.663 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:31.229 Using 'verbs' RDMA provider 00:02:47.042 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:59.242 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:59.242 go version go1.21.1 linux/amd64 00:02:59.242 Creating mk/config.mk...done. 00:02:59.242 Creating mk/cc.flags.mk...done. 00:02:59.242 Type 'make' to build. 
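For reference, a minimal sketch of the configure-and-build step that the autobuild stage performs here, using the flag set copied verbatim from the config_params and configure lines earlier in this log; the /home/vagrant/spdk_repo layout is assumed to match this run:

#!/usr/bin/env bash
# Same flags as reported in config_params above (plus --with-shared from the
# actual configure invocation). The libvfio-user and DPDK meson builds that
# follow in this log are driven by this make.
cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-werror --with-rdma --with-usdt \
    --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator \
    --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage \
    --with-ublk --with-vfio-user --with-avahi --with-golang --with-shared
make -j10   # same job count as the run_test make step that follows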
00:02:59.242 18:10:10 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:59.242 18:10:10 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:59.242 18:10:10 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:59.242 18:10:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:59.242 ************************************ 00:02:59.242 START TEST make 00:02:59.242 ************************************ 00:02:59.242 18:10:10 make -- common/autotest_common.sh@1123 -- $ make -j10 00:02:59.500 make[1]: Nothing to be done for 'all'. 00:03:00.873 The Meson build system 00:03:00.873 Version: 1.3.1 00:03:00.873 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:03:00.873 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:03:00.873 Build type: native build 00:03:00.873 Project name: libvfio-user 00:03:00.873 Project version: 0.0.1 00:03:00.873 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:00.873 C linker for the host machine: cc ld.bfd 2.39-16 00:03:00.873 Host machine cpu family: x86_64 00:03:00.873 Host machine cpu: x86_64 00:03:00.873 Run-time dependency threads found: YES 00:03:00.873 Library dl found: YES 00:03:00.873 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:00.873 Run-time dependency json-c found: YES 0.17 00:03:00.873 Run-time dependency cmocka found: YES 1.1.7 00:03:00.873 Program pytest-3 found: NO 00:03:00.873 Program flake8 found: NO 00:03:00.873 Program misspell-fixer found: NO 00:03:00.873 Program restructuredtext-lint found: NO 00:03:00.873 Program valgrind found: YES (/usr/bin/valgrind) 00:03:00.873 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:00.873 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:00.873 Compiler for C supports arguments -Wwrite-strings: YES 00:03:00.873 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:00.873 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:03:00.873 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:03:00.873 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:00.873 Build targets in project: 8 00:03:00.873 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:00.873 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:00.873 00:03:00.873 libvfio-user 0.0.1 00:03:00.873 00:03:00.873 User defined options 00:03:00.873 buildtype : debug 00:03:00.873 default_library: shared 00:03:00.873 libdir : /usr/local/lib 00:03:00.873 00:03:00.873 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:01.444 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:03:01.703 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:01.703 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:01.703 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:01.703 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:01.703 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:01.703 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:01.703 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:01.703 [8/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:01.961 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:01.961 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:01.961 [11/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:01.961 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:01.961 [13/37] Compiling C object samples/null.p/null.c.o 00:03:01.961 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:01.961 [15/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:01.961 [16/37] Compiling C object samples/server.p/server.c.o 00:03:01.961 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:01.961 [18/37] Compiling C object samples/client.p/client.c.o 00:03:01.961 [19/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:01.961 [20/37] Linking target samples/client 00:03:01.961 [21/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:01.961 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:01.961 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:02.219 [24/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:02.219 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:02.219 [26/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:02.219 [27/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:02.219 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:02.219 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:02.219 [30/37] Linking target test/unit_tests 00:03:02.219 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:03:02.476 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:02.476 [33/37] Linking target samples/gpio-pci-idio-16 00:03:02.476 [34/37] Linking target samples/null 00:03:02.476 [35/37] Linking target samples/server 00:03:02.476 [36/37] Linking target samples/lspci 00:03:02.476 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:02.476 INFO: autodetecting backend as ninja 00:03:02.476 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:03:02.733 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:03:02.991 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:03:02.991 ninja: no work to do. 00:03:15.271 The Meson build system 00:03:15.271 Version: 1.3.1 00:03:15.271 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:15.271 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:15.271 Build type: native build 00:03:15.271 Program cat found: YES (/usr/bin/cat) 00:03:15.271 Project name: DPDK 00:03:15.271 Project version: 24.03.0 00:03:15.271 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:15.271 C linker for the host machine: cc ld.bfd 2.39-16 00:03:15.271 Host machine cpu family: x86_64 00:03:15.271 Host machine cpu: x86_64 00:03:15.271 Message: ## Building in Developer Mode ## 00:03:15.271 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:15.271 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:15.271 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:15.271 Program python3 found: YES (/usr/bin/python3) 00:03:15.271 Program cat found: YES (/usr/bin/cat) 00:03:15.271 Compiler for C supports arguments -march=native: YES 00:03:15.271 Checking for size of "void *" : 8 00:03:15.271 Checking for size of "void *" : 8 (cached) 00:03:15.271 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:03:15.271 Library m found: YES 00:03:15.271 Library numa found: YES 00:03:15.271 Has header "numaif.h" : YES 00:03:15.271 Library fdt found: NO 00:03:15.271 Library execinfo found: NO 00:03:15.271 Has header "execinfo.h" : YES 00:03:15.271 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:15.271 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:15.271 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:15.271 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:15.271 Run-time dependency openssl found: YES 3.0.9 00:03:15.271 Run-time dependency libpcap found: YES 1.10.4 00:03:15.271 Has header "pcap.h" with dependency libpcap: YES 00:03:15.271 Compiler for C supports arguments -Wcast-qual: YES 00:03:15.271 Compiler for C supports arguments -Wdeprecated: YES 00:03:15.271 Compiler for C supports arguments -Wformat: YES 00:03:15.271 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:15.271 Compiler for C supports arguments -Wformat-security: NO 00:03:15.271 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:15.271 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:15.271 Compiler for C supports arguments -Wnested-externs: YES 00:03:15.271 Compiler for C supports arguments -Wold-style-definition: YES 00:03:15.271 Compiler for C supports arguments -Wpointer-arith: YES 00:03:15.271 Compiler for C supports arguments -Wsign-compare: YES 00:03:15.271 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:15.271 Compiler for C supports arguments -Wundef: YES 00:03:15.271 Compiler for C supports arguments -Wwrite-strings: YES 00:03:15.271 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:15.271 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:15.271 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:15.271 Compiler for C supports arguments -Wno-zero-length-bounds: 
YES 00:03:15.271 Program objdump found: YES (/usr/bin/objdump) 00:03:15.271 Compiler for C supports arguments -mavx512f: YES 00:03:15.271 Checking if "AVX512 checking" compiles: YES 00:03:15.271 Fetching value of define "__SSE4_2__" : 1 00:03:15.271 Fetching value of define "__AES__" : 1 00:03:15.271 Fetching value of define "__AVX__" : 1 00:03:15.271 Fetching value of define "__AVX2__" : 1 00:03:15.271 Fetching value of define "__AVX512BW__" : (undefined) 00:03:15.271 Fetching value of define "__AVX512CD__" : (undefined) 00:03:15.271 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:15.271 Fetching value of define "__AVX512F__" : (undefined) 00:03:15.271 Fetching value of define "__AVX512VL__" : (undefined) 00:03:15.271 Fetching value of define "__PCLMUL__" : 1 00:03:15.271 Fetching value of define "__RDRND__" : 1 00:03:15.271 Fetching value of define "__RDSEED__" : 1 00:03:15.271 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:15.271 Fetching value of define "__znver1__" : (undefined) 00:03:15.271 Fetching value of define "__znver2__" : (undefined) 00:03:15.271 Fetching value of define "__znver3__" : (undefined) 00:03:15.271 Fetching value of define "__znver4__" : (undefined) 00:03:15.271 Library asan found: YES 00:03:15.271 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:15.271 Message: lib/log: Defining dependency "log" 00:03:15.271 Message: lib/kvargs: Defining dependency "kvargs" 00:03:15.271 Message: lib/telemetry: Defining dependency "telemetry" 00:03:15.271 Library rt found: YES 00:03:15.271 Checking for function "getentropy" : NO 00:03:15.271 Message: lib/eal: Defining dependency "eal" 00:03:15.271 Message: lib/ring: Defining dependency "ring" 00:03:15.271 Message: lib/rcu: Defining dependency "rcu" 00:03:15.271 Message: lib/mempool: Defining dependency "mempool" 00:03:15.271 Message: lib/mbuf: Defining dependency "mbuf" 00:03:15.271 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:15.271 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:15.271 Compiler for C supports arguments -mpclmul: YES 00:03:15.271 Compiler for C supports arguments -maes: YES 00:03:15.271 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:15.271 Compiler for C supports arguments -mavx512bw: YES 00:03:15.271 Compiler for C supports arguments -mavx512dq: YES 00:03:15.271 Compiler for C supports arguments -mavx512vl: YES 00:03:15.271 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:15.271 Compiler for C supports arguments -mavx2: YES 00:03:15.271 Compiler for C supports arguments -mavx: YES 00:03:15.271 Message: lib/net: Defining dependency "net" 00:03:15.271 Message: lib/meter: Defining dependency "meter" 00:03:15.271 Message: lib/ethdev: Defining dependency "ethdev" 00:03:15.271 Message: lib/pci: Defining dependency "pci" 00:03:15.271 Message: lib/cmdline: Defining dependency "cmdline" 00:03:15.271 Message: lib/hash: Defining dependency "hash" 00:03:15.271 Message: lib/timer: Defining dependency "timer" 00:03:15.271 Message: lib/compressdev: Defining dependency "compressdev" 00:03:15.271 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:15.271 Message: lib/dmadev: Defining dependency "dmadev" 00:03:15.271 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:15.271 Message: lib/power: Defining dependency "power" 00:03:15.271 Message: lib/reorder: Defining dependency "reorder" 00:03:15.271 Message: lib/security: Defining dependency "security" 00:03:15.271 Has header "linux/userfaultfd.h" : YES 
00:03:15.271 Has header "linux/vduse.h" : YES 00:03:15.271 Message: lib/vhost: Defining dependency "vhost" 00:03:15.271 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:15.271 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:15.271 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:15.271 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:15.271 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:15.271 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:15.271 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:15.271 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:15.271 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:15.271 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:15.271 Program doxygen found: YES (/usr/bin/doxygen) 00:03:15.271 Configuring doxy-api-html.conf using configuration 00:03:15.271 Configuring doxy-api-man.conf using configuration 00:03:15.271 Program mandb found: YES (/usr/bin/mandb) 00:03:15.271 Program sphinx-build found: NO 00:03:15.271 Configuring rte_build_config.h using configuration 00:03:15.271 Message: 00:03:15.271 ================= 00:03:15.271 Applications Enabled 00:03:15.271 ================= 00:03:15.271 00:03:15.271 apps: 00:03:15.271 00:03:15.271 00:03:15.271 Message: 00:03:15.271 ================= 00:03:15.271 Libraries Enabled 00:03:15.271 ================= 00:03:15.271 00:03:15.271 libs: 00:03:15.271 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:15.271 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:15.271 cryptodev, dmadev, power, reorder, security, vhost, 00:03:15.271 00:03:15.271 Message: 00:03:15.271 =============== 00:03:15.271 Drivers Enabled 00:03:15.271 =============== 00:03:15.271 00:03:15.271 common: 00:03:15.271 00:03:15.271 bus: 00:03:15.271 pci, vdev, 00:03:15.271 mempool: 00:03:15.271 ring, 00:03:15.271 dma: 00:03:15.271 00:03:15.271 net: 00:03:15.272 00:03:15.272 crypto: 00:03:15.272 00:03:15.272 compress: 00:03:15.272 00:03:15.272 vdpa: 00:03:15.272 00:03:15.272 00:03:15.272 Message: 00:03:15.272 ================= 00:03:15.272 Content Skipped 00:03:15.272 ================= 00:03:15.272 00:03:15.272 apps: 00:03:15.272 dumpcap: explicitly disabled via build config 00:03:15.272 graph: explicitly disabled via build config 00:03:15.272 pdump: explicitly disabled via build config 00:03:15.272 proc-info: explicitly disabled via build config 00:03:15.272 test-acl: explicitly disabled via build config 00:03:15.272 test-bbdev: explicitly disabled via build config 00:03:15.272 test-cmdline: explicitly disabled via build config 00:03:15.272 test-compress-perf: explicitly disabled via build config 00:03:15.272 test-crypto-perf: explicitly disabled via build config 00:03:15.272 test-dma-perf: explicitly disabled via build config 00:03:15.272 test-eventdev: explicitly disabled via build config 00:03:15.272 test-fib: explicitly disabled via build config 00:03:15.272 test-flow-perf: explicitly disabled via build config 00:03:15.272 test-gpudev: explicitly disabled via build config 00:03:15.272 test-mldev: explicitly disabled via build config 00:03:15.272 test-pipeline: explicitly disabled via build config 00:03:15.272 test-pmd: explicitly disabled via build config 00:03:15.272 test-regex: explicitly disabled via build config 00:03:15.272 test-sad: explicitly disabled via build 
config 00:03:15.272 test-security-perf: explicitly disabled via build config 00:03:15.272 00:03:15.272 libs: 00:03:15.272 argparse: explicitly disabled via build config 00:03:15.272 metrics: explicitly disabled via build config 00:03:15.272 acl: explicitly disabled via build config 00:03:15.272 bbdev: explicitly disabled via build config 00:03:15.272 bitratestats: explicitly disabled via build config 00:03:15.272 bpf: explicitly disabled via build config 00:03:15.272 cfgfile: explicitly disabled via build config 00:03:15.272 distributor: explicitly disabled via build config 00:03:15.272 efd: explicitly disabled via build config 00:03:15.272 eventdev: explicitly disabled via build config 00:03:15.272 dispatcher: explicitly disabled via build config 00:03:15.272 gpudev: explicitly disabled via build config 00:03:15.272 gro: explicitly disabled via build config 00:03:15.272 gso: explicitly disabled via build config 00:03:15.272 ip_frag: explicitly disabled via build config 00:03:15.272 jobstats: explicitly disabled via build config 00:03:15.272 latencystats: explicitly disabled via build config 00:03:15.272 lpm: explicitly disabled via build config 00:03:15.272 member: explicitly disabled via build config 00:03:15.272 pcapng: explicitly disabled via build config 00:03:15.272 rawdev: explicitly disabled via build config 00:03:15.272 regexdev: explicitly disabled via build config 00:03:15.272 mldev: explicitly disabled via build config 00:03:15.272 rib: explicitly disabled via build config 00:03:15.272 sched: explicitly disabled via build config 00:03:15.272 stack: explicitly disabled via build config 00:03:15.272 ipsec: explicitly disabled via build config 00:03:15.272 pdcp: explicitly disabled via build config 00:03:15.272 fib: explicitly disabled via build config 00:03:15.272 port: explicitly disabled via build config 00:03:15.272 pdump: explicitly disabled via build config 00:03:15.272 table: explicitly disabled via build config 00:03:15.272 pipeline: explicitly disabled via build config 00:03:15.272 graph: explicitly disabled via build config 00:03:15.272 node: explicitly disabled via build config 00:03:15.272 00:03:15.272 drivers: 00:03:15.272 common/cpt: not in enabled drivers build config 00:03:15.272 common/dpaax: not in enabled drivers build config 00:03:15.272 common/iavf: not in enabled drivers build config 00:03:15.272 common/idpf: not in enabled drivers build config 00:03:15.272 common/ionic: not in enabled drivers build config 00:03:15.272 common/mvep: not in enabled drivers build config 00:03:15.272 common/octeontx: not in enabled drivers build config 00:03:15.272 bus/auxiliary: not in enabled drivers build config 00:03:15.272 bus/cdx: not in enabled drivers build config 00:03:15.272 bus/dpaa: not in enabled drivers build config 00:03:15.272 bus/fslmc: not in enabled drivers build config 00:03:15.272 bus/ifpga: not in enabled drivers build config 00:03:15.272 bus/platform: not in enabled drivers build config 00:03:15.272 bus/uacce: not in enabled drivers build config 00:03:15.272 bus/vmbus: not in enabled drivers build config 00:03:15.272 common/cnxk: not in enabled drivers build config 00:03:15.272 common/mlx5: not in enabled drivers build config 00:03:15.272 common/nfp: not in enabled drivers build config 00:03:15.272 common/nitrox: not in enabled drivers build config 00:03:15.272 common/qat: not in enabled drivers build config 00:03:15.272 common/sfc_efx: not in enabled drivers build config 00:03:15.272 mempool/bucket: not in enabled drivers build config 00:03:15.272 
mempool/cnxk: not in enabled drivers build config 00:03:15.272 mempool/dpaa: not in enabled drivers build config 00:03:15.272 mempool/dpaa2: not in enabled drivers build config 00:03:15.272 mempool/octeontx: not in enabled drivers build config 00:03:15.272 mempool/stack: not in enabled drivers build config 00:03:15.272 dma/cnxk: not in enabled drivers build config 00:03:15.272 dma/dpaa: not in enabled drivers build config 00:03:15.272 dma/dpaa2: not in enabled drivers build config 00:03:15.272 dma/hisilicon: not in enabled drivers build config 00:03:15.272 dma/idxd: not in enabled drivers build config 00:03:15.272 dma/ioat: not in enabled drivers build config 00:03:15.272 dma/skeleton: not in enabled drivers build config 00:03:15.272 net/af_packet: not in enabled drivers build config 00:03:15.272 net/af_xdp: not in enabled drivers build config 00:03:15.272 net/ark: not in enabled drivers build config 00:03:15.272 net/atlantic: not in enabled drivers build config 00:03:15.272 net/avp: not in enabled drivers build config 00:03:15.272 net/axgbe: not in enabled drivers build config 00:03:15.272 net/bnx2x: not in enabled drivers build config 00:03:15.272 net/bnxt: not in enabled drivers build config 00:03:15.272 net/bonding: not in enabled drivers build config 00:03:15.272 net/cnxk: not in enabled drivers build config 00:03:15.272 net/cpfl: not in enabled drivers build config 00:03:15.272 net/cxgbe: not in enabled drivers build config 00:03:15.272 net/dpaa: not in enabled drivers build config 00:03:15.272 net/dpaa2: not in enabled drivers build config 00:03:15.272 net/e1000: not in enabled drivers build config 00:03:15.272 net/ena: not in enabled drivers build config 00:03:15.272 net/enetc: not in enabled drivers build config 00:03:15.272 net/enetfec: not in enabled drivers build config 00:03:15.272 net/enic: not in enabled drivers build config 00:03:15.272 net/failsafe: not in enabled drivers build config 00:03:15.272 net/fm10k: not in enabled drivers build config 00:03:15.272 net/gve: not in enabled drivers build config 00:03:15.272 net/hinic: not in enabled drivers build config 00:03:15.272 net/hns3: not in enabled drivers build config 00:03:15.272 net/i40e: not in enabled drivers build config 00:03:15.272 net/iavf: not in enabled drivers build config 00:03:15.272 net/ice: not in enabled drivers build config 00:03:15.272 net/idpf: not in enabled drivers build config 00:03:15.272 net/igc: not in enabled drivers build config 00:03:15.272 net/ionic: not in enabled drivers build config 00:03:15.272 net/ipn3ke: not in enabled drivers build config 00:03:15.272 net/ixgbe: not in enabled drivers build config 00:03:15.272 net/mana: not in enabled drivers build config 00:03:15.272 net/memif: not in enabled drivers build config 00:03:15.272 net/mlx4: not in enabled drivers build config 00:03:15.272 net/mlx5: not in enabled drivers build config 00:03:15.272 net/mvneta: not in enabled drivers build config 00:03:15.272 net/mvpp2: not in enabled drivers build config 00:03:15.272 net/netvsc: not in enabled drivers build config 00:03:15.272 net/nfb: not in enabled drivers build config 00:03:15.272 net/nfp: not in enabled drivers build config 00:03:15.272 net/ngbe: not in enabled drivers build config 00:03:15.272 net/null: not in enabled drivers build config 00:03:15.272 net/octeontx: not in enabled drivers build config 00:03:15.272 net/octeon_ep: not in enabled drivers build config 00:03:15.272 net/pcap: not in enabled drivers build config 00:03:15.272 net/pfe: not in enabled drivers build config 
00:03:15.272 net/qede: not in enabled drivers build config 00:03:15.272 net/ring: not in enabled drivers build config 00:03:15.272 net/sfc: not in enabled drivers build config 00:03:15.272 net/softnic: not in enabled drivers build config 00:03:15.272 net/tap: not in enabled drivers build config 00:03:15.272 net/thunderx: not in enabled drivers build config 00:03:15.272 net/txgbe: not in enabled drivers build config 00:03:15.273 net/vdev_netvsc: not in enabled drivers build config 00:03:15.273 net/vhost: not in enabled drivers build config 00:03:15.273 net/virtio: not in enabled drivers build config 00:03:15.273 net/vmxnet3: not in enabled drivers build config 00:03:15.273 raw/*: missing internal dependency, "rawdev" 00:03:15.273 crypto/armv8: not in enabled drivers build config 00:03:15.273 crypto/bcmfs: not in enabled drivers build config 00:03:15.273 crypto/caam_jr: not in enabled drivers build config 00:03:15.273 crypto/ccp: not in enabled drivers build config 00:03:15.273 crypto/cnxk: not in enabled drivers build config 00:03:15.273 crypto/dpaa_sec: not in enabled drivers build config 00:03:15.273 crypto/dpaa2_sec: not in enabled drivers build config 00:03:15.273 crypto/ipsec_mb: not in enabled drivers build config 00:03:15.273 crypto/mlx5: not in enabled drivers build config 00:03:15.273 crypto/mvsam: not in enabled drivers build config 00:03:15.273 crypto/nitrox: not in enabled drivers build config 00:03:15.273 crypto/null: not in enabled drivers build config 00:03:15.273 crypto/octeontx: not in enabled drivers build config 00:03:15.273 crypto/openssl: not in enabled drivers build config 00:03:15.273 crypto/scheduler: not in enabled drivers build config 00:03:15.273 crypto/uadk: not in enabled drivers build config 00:03:15.273 crypto/virtio: not in enabled drivers build config 00:03:15.273 compress/isal: not in enabled drivers build config 00:03:15.273 compress/mlx5: not in enabled drivers build config 00:03:15.273 compress/nitrox: not in enabled drivers build config 00:03:15.273 compress/octeontx: not in enabled drivers build config 00:03:15.273 compress/zlib: not in enabled drivers build config 00:03:15.273 regex/*: missing internal dependency, "regexdev" 00:03:15.273 ml/*: missing internal dependency, "mldev" 00:03:15.273 vdpa/ifc: not in enabled drivers build config 00:03:15.273 vdpa/mlx5: not in enabled drivers build config 00:03:15.273 vdpa/nfp: not in enabled drivers build config 00:03:15.273 vdpa/sfc: not in enabled drivers build config 00:03:15.273 event/*: missing internal dependency, "eventdev" 00:03:15.273 baseband/*: missing internal dependency, "bbdev" 00:03:15.273 gpu/*: missing internal dependency, "gpudev" 00:03:15.273 00:03:15.273 00:03:15.273 Build targets in project: 85 00:03:15.273 00:03:15.273 DPDK 24.03.0 00:03:15.273 00:03:15.273 User defined options 00:03:15.273 buildtype : debug 00:03:15.273 default_library : shared 00:03:15.273 libdir : lib 00:03:15.273 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:15.273 b_sanitize : address 00:03:15.273 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:15.273 c_link_args : 00:03:15.273 cpu_instruction_set: native 00:03:15.273 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:15.273 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:15.273 enable_docs : false 00:03:15.273 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:15.273 enable_kmods : false 00:03:15.273 max_lcores : 128 00:03:15.273 tests : false 00:03:15.273 00:03:15.273 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:15.839 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:15.839 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:15.839 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:15.839 [3/268] Linking static target lib/librte_kvargs.a 00:03:15.839 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:15.839 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:15.839 [6/268] Linking static target lib/librte_log.a 00:03:16.405 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.405 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:16.663 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:16.663 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:16.919 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:16.919 [12/268] Linking static target lib/librte_telemetry.a 00:03:16.919 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:16.919 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:16.919 [15/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.919 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:16.919 [17/268] Linking target lib/librte_log.so.24.1 00:03:16.919 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:17.177 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:17.177 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:17.435 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:17.435 [22/268] Linking target lib/librte_kvargs.so.24.1 00:03:17.693 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:17.693 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:17.693 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:17.693 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:17.693 [27/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.949 [28/268] Linking target lib/librte_telemetry.so.24.1 00:03:17.949 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:18.207 [30/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:18.207 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:18.207 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:18.465 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:18.465 [34/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:18.465 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:18.734 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:18.734 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:18.734 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:18.734 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:18.992 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:18.992 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:18.992 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:18.992 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:19.250 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:19.250 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:19.250 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:19.508 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:19.766 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:19.766 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:20.024 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:20.024 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:20.024 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:20.024 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:20.024 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:20.281 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:20.281 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:20.281 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:20.281 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:20.847 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:20.847 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:20.847 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:20.847 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:20.847 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:21.105 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:21.105 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:21.105 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:21.105 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:21.374 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:21.636 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:21.893 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:21.893 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:21.893 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:21.893 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 
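The "User defined options" summary printed above corresponds roughly to a meson configuration like the one below. This is a sketch only: SPDK's build scripts generate the real invocation for the bundled DPDK, and the disable_apps/disable_libs lists are abridged here.
# Run from the bundled DPDK source tree (spdk/dpdk); options mirror the summary above (abridged).
meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp \
  --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build --libdir=lib \
  -Dbuildtype=debug -Ddefault_library=shared -Db_sanitize=address \
  -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
  -Dcpu_instruction_set=native \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
  -Ddisable_libs=acl,bbdev,bpf,graph,ipsec,node,pipeline,table \
  -Ddisable_apps=dumpcap,graph,pdump,proc-info,test \
  -Dmax_lcores=128 -Dtests=false -Denable_docs=false -Denable_kmods=false
ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10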
00:03:21.893 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:21.893 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:22.151 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:22.151 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:22.151 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:22.151 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:22.409 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:22.409 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:22.974 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:22.974 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:22.974 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:22.974 [85/268] Linking static target lib/librte_eal.a 00:03:22.974 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:22.974 [87/268] Linking static target lib/librte_ring.a 00:03:23.231 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:23.231 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:23.488 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:23.488 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:23.488 [92/268] Linking static target lib/librte_mempool.a 00:03:23.488 [93/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:23.488 [94/268] Linking static target lib/librte_rcu.a 00:03:23.488 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:23.746 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.042 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:24.042 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:24.042 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:24.321 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.321 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:24.321 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:24.579 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:24.579 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:24.579 [105/268] Linking static target lib/librte_mbuf.a 00:03:24.579 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:24.579 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:24.850 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:24.850 [109/268] Linking static target lib/librte_net.a 00:03:24.850 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.850 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:24.850 [112/268] Linking static target lib/librte_meter.a 00:03:25.109 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:25.366 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:25.366 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:25.366 [116/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:25.366 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.366 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.931 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.931 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:25.932 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:26.189 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:26.786 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:26.786 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:26.786 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:26.786 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:26.786 [127/268] Linking static target lib/librte_pci.a 00:03:26.786 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:26.786 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:27.044 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:27.044 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:27.044 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:27.044 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:27.044 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:27.302 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.302 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:27.302 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:27.302 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:27.302 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:27.302 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:27.559 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:27.559 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:27.559 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:27.559 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:27.816 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:27.816 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:28.074 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:28.074 [148/268] Linking static target lib/librte_cmdline.a 00:03:28.331 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:28.589 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:28.589 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:28.589 [152/268] Linking static target lib/librte_timer.a 00:03:28.589 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:28.589 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:28.589 [155/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:28.846 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:29.104 [157/268] Linking static target lib/librte_ethdev.a 00:03:29.104 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:29.105 [159/268] Linking static target lib/librte_hash.a 00:03:29.105 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:29.362 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.362 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:29.362 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:29.362 [164/268] Linking static target lib/librte_compressdev.a 00:03:29.362 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:29.633 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:29.890 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:29.890 [168/268] Linking static target lib/librte_dmadev.a 00:03:29.890 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:30.147 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:30.147 [171/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.147 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:30.147 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:30.711 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.711 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:30.711 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.711 [177/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.969 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:30.969 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:30.969 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:30.969 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:31.226 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:31.226 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:31.226 [184/268] Linking static target lib/librte_cryptodev.a 00:03:31.483 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:31.483 [186/268] Linking static target lib/librte_power.a 00:03:31.483 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:31.483 [188/268] Linking static target lib/librte_reorder.a 00:03:31.741 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:31.741 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:31.999 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:31.999 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:31.999 [193/268] Linking static target lib/librte_security.a 00:03:32.257 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.515 [195/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:32.773 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.773 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.773 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:32.773 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:33.031 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:33.288 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:33.288 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:33.545 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:33.545 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:33.545 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:33.802 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:33.802 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.802 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:33.802 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:34.058 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:34.058 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:34.058 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:34.058 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:34.058 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:34.058 [215/268] Linking static target drivers/librte_bus_vdev.a 00:03:34.315 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:34.315 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:34.315 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:34.315 [219/268] Linking static target drivers/librte_bus_pci.a 00:03:34.315 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:34.315 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:34.572 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.572 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:34.572 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:34.572 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:34.572 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:34.832 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.766 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.766 [229/268] Linking target lib/librte_eal.so.24.1 00:03:36.024 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:36.024 [231/268] Linking target lib/librte_ring.so.24.1 00:03:36.024 [232/268] Linking target lib/librte_pci.so.24.1 00:03:36.024 
[233/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:36.024 [234/268] Linking target lib/librte_dmadev.so.24.1 00:03:36.024 [235/268] Linking target lib/librte_meter.so.24.1 00:03:36.024 [236/268] Linking target lib/librte_timer.so.24.1 00:03:36.281 [237/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:36.281 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:36.281 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:36.281 [240/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:36.281 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:36.281 [242/268] Linking target lib/librte_rcu.so.24.1 00:03:36.281 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:36.281 [244/268] Linking target lib/librte_mempool.so.24.1 00:03:36.281 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:36.281 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:36.539 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:36.539 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:36.539 [249/268] Linking target lib/librte_mbuf.so.24.1 00:03:36.539 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:36.798 [251/268] Linking target lib/librte_reorder.so.24.1 00:03:36.798 [252/268] Linking target lib/librte_cryptodev.so.24.1 00:03:36.798 [253/268] Linking target lib/librte_net.so.24.1 00:03:36.798 [254/268] Linking target lib/librte_compressdev.so.24.1 00:03:36.798 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:36.798 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:37.056 [257/268] Linking target lib/librte_cmdline.so.24.1 00:03:37.056 [258/268] Linking target lib/librte_hash.so.24.1 00:03:37.056 [259/268] Linking target lib/librte_security.so.24.1 00:03:37.056 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:37.313 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.571 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:37.571 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:37.830 [264/268] Linking target lib/librte_power.so.24.1 00:03:43.089 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:43.089 [266/268] Linking static target lib/librte_vhost.a 00:03:44.462 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.462 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:44.462 INFO: autodetecting backend as ninja 00:03:44.462 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:45.835 CC lib/ut/ut.o 00:03:45.835 CC lib/log/log.o 00:03:45.835 CC lib/log/log_flags.o 00:03:45.835 CC lib/log/log_deprecated.o 00:03:45.835 CC lib/ut_mock/mock.o 00:03:45.835 LIB libspdk_log.a 00:03:45.835 LIB libspdk_ut.a 00:03:45.835 SO libspdk_ut.so.2.0 00:03:45.835 LIB libspdk_ut_mock.a 00:03:45.835 SO libspdk_log.so.7.0 00:03:45.835 SO libspdk_ut_mock.so.6.0 00:03:45.835 SYMLINK libspdk_ut.so 00:03:45.835 SYMLINK libspdk_log.so 00:03:45.835 SYMLINK 
libspdk_ut_mock.so 00:03:46.092 CC lib/ioat/ioat.o 00:03:46.092 CC lib/util/base64.o 00:03:46.092 CC lib/util/bit_array.o 00:03:46.092 CXX lib/trace_parser/trace.o 00:03:46.092 CC lib/dma/dma.o 00:03:46.092 CC lib/util/crc16.o 00:03:46.092 CC lib/util/cpuset.o 00:03:46.092 CC lib/util/crc32.o 00:03:46.092 CC lib/util/crc32c.o 00:03:46.351 CC lib/vfio_user/host/vfio_user_pci.o 00:03:46.351 CC lib/util/crc32_ieee.o 00:03:46.351 CC lib/util/crc64.o 00:03:46.351 CC lib/vfio_user/host/vfio_user.o 00:03:46.351 CC lib/util/dif.o 00:03:46.351 CC lib/util/fd.o 00:03:46.351 LIB libspdk_dma.a 00:03:46.351 CC lib/util/fd_group.o 00:03:46.351 SO libspdk_dma.so.4.0 00:03:46.351 CC lib/util/file.o 00:03:46.351 CC lib/util/hexlify.o 00:03:46.610 SYMLINK libspdk_dma.so 00:03:46.610 CC lib/util/iov.o 00:03:46.610 CC lib/util/math.o 00:03:46.610 CC lib/util/net.o 00:03:46.610 LIB libspdk_ioat.a 00:03:46.610 LIB libspdk_vfio_user.a 00:03:46.610 SO libspdk_ioat.so.7.0 00:03:46.610 SO libspdk_vfio_user.so.5.0 00:03:46.610 CC lib/util/pipe.o 00:03:46.610 CC lib/util/strerror_tls.o 00:03:46.610 SYMLINK libspdk_ioat.so 00:03:46.610 CC lib/util/string.o 00:03:46.610 SYMLINK libspdk_vfio_user.so 00:03:46.610 CC lib/util/uuid.o 00:03:46.610 CC lib/util/xor.o 00:03:46.610 CC lib/util/zipf.o 00:03:47.215 LIB libspdk_util.a 00:03:47.215 SO libspdk_util.so.10.0 00:03:47.215 LIB libspdk_trace_parser.a 00:03:47.473 SO libspdk_trace_parser.so.5.0 00:03:47.473 SYMLINK libspdk_util.so 00:03:47.473 SYMLINK libspdk_trace_parser.so 00:03:47.473 CC lib/rdma_provider/common.o 00:03:47.473 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:47.473 CC lib/env_dpdk/memory.o 00:03:47.473 CC lib/env_dpdk/env.o 00:03:47.473 CC lib/env_dpdk/pci.o 00:03:47.473 CC lib/conf/conf.o 00:03:47.731 CC lib/rdma_utils/rdma_utils.o 00:03:47.731 CC lib/idxd/idxd.o 00:03:47.731 CC lib/json/json_parse.o 00:03:47.731 CC lib/vmd/vmd.o 00:03:47.731 CC lib/json/json_util.o 00:03:47.731 LIB libspdk_conf.a 00:03:47.989 SO libspdk_conf.so.6.0 00:03:47.989 LIB libspdk_rdma_utils.a 00:03:47.989 SO libspdk_rdma_utils.so.1.0 00:03:47.989 LIB libspdk_rdma_provider.a 00:03:47.989 SYMLINK libspdk_conf.so 00:03:47.989 CC lib/json/json_write.o 00:03:47.989 SO libspdk_rdma_provider.so.6.0 00:03:47.989 SYMLINK libspdk_rdma_utils.so 00:03:47.989 CC lib/vmd/led.o 00:03:47.989 SYMLINK libspdk_rdma_provider.so 00:03:47.989 CC lib/env_dpdk/init.o 00:03:47.989 CC lib/idxd/idxd_user.o 00:03:48.246 CC lib/env_dpdk/threads.o 00:03:48.246 CC lib/env_dpdk/pci_ioat.o 00:03:48.246 CC lib/idxd/idxd_kernel.o 00:03:48.246 LIB libspdk_json.a 00:03:48.246 SO libspdk_json.so.6.0 00:03:48.246 CC lib/env_dpdk/pci_virtio.o 00:03:48.246 CC lib/env_dpdk/pci_vmd.o 00:03:48.246 CC lib/env_dpdk/pci_idxd.o 00:03:48.504 SYMLINK libspdk_json.so 00:03:48.504 CC lib/env_dpdk/pci_event.o 00:03:48.504 CC lib/env_dpdk/sigbus_handler.o 00:03:48.504 CC lib/env_dpdk/pci_dpdk.o 00:03:48.504 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:48.504 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:48.504 LIB libspdk_vmd.a 00:03:48.504 LIB libspdk_idxd.a 00:03:48.504 SO libspdk_vmd.so.6.0 00:03:48.504 SO libspdk_idxd.so.12.0 00:03:48.761 SYMLINK libspdk_vmd.so 00:03:48.761 SYMLINK libspdk_idxd.so 00:03:48.761 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:48.761 CC lib/jsonrpc/jsonrpc_server.o 00:03:48.761 CC lib/jsonrpc/jsonrpc_client.o 00:03:48.761 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:49.019 LIB libspdk_jsonrpc.a 00:03:49.019 SO libspdk_jsonrpc.so.6.0 00:03:49.277 SYMLINK libspdk_jsonrpc.so 00:03:49.535 CC lib/rpc/rpc.o 00:03:49.535 
LIB libspdk_env_dpdk.a 00:03:49.793 SO libspdk_env_dpdk.so.15.0 00:03:49.793 LIB libspdk_rpc.a 00:03:49.793 SO libspdk_rpc.so.6.0 00:03:49.793 SYMLINK libspdk_rpc.so 00:03:49.793 SYMLINK libspdk_env_dpdk.so 00:03:50.051 CC lib/notify/notify.o 00:03:50.051 CC lib/notify/notify_rpc.o 00:03:50.051 CC lib/keyring/keyring.o 00:03:50.051 CC lib/keyring/keyring_rpc.o 00:03:50.051 CC lib/trace/trace.o 00:03:50.051 CC lib/trace/trace_flags.o 00:03:50.051 CC lib/trace/trace_rpc.o 00:03:50.310 LIB libspdk_notify.a 00:03:50.310 SO libspdk_notify.so.6.0 00:03:50.310 SYMLINK libspdk_notify.so 00:03:50.310 LIB libspdk_keyring.a 00:03:50.569 LIB libspdk_trace.a 00:03:50.569 SO libspdk_keyring.so.1.0 00:03:50.569 SO libspdk_trace.so.10.0 00:03:50.569 SYMLINK libspdk_keyring.so 00:03:50.569 SYMLINK libspdk_trace.so 00:03:50.828 CC lib/sock/sock.o 00:03:50.828 CC lib/sock/sock_rpc.o 00:03:50.828 CC lib/thread/iobuf.o 00:03:50.828 CC lib/thread/thread.o 00:03:51.395 LIB libspdk_sock.a 00:03:51.395 SO libspdk_sock.so.10.0 00:03:51.395 SYMLINK libspdk_sock.so 00:03:51.986 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:51.986 CC lib/nvme/nvme_ctrlr.o 00:03:51.986 CC lib/nvme/nvme_fabric.o 00:03:51.986 CC lib/nvme/nvme_ns.o 00:03:51.986 CC lib/nvme/nvme_pcie_common.o 00:03:51.986 CC lib/nvme/nvme_ns_cmd.o 00:03:51.986 CC lib/nvme/nvme_pcie.o 00:03:51.986 CC lib/nvme/nvme_qpair.o 00:03:51.986 CC lib/nvme/nvme.o 00:03:52.552 CC lib/nvme/nvme_quirks.o 00:03:52.552 CC lib/nvme/nvme_transport.o 00:03:52.810 CC lib/nvme/nvme_discovery.o 00:03:52.810 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:52.810 LIB libspdk_thread.a 00:03:52.810 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:52.810 SO libspdk_thread.so.10.1 00:03:53.068 CC lib/nvme/nvme_tcp.o 00:03:53.068 SYMLINK libspdk_thread.so 00:03:53.068 CC lib/nvme/nvme_opal.o 00:03:53.068 CC lib/nvme/nvme_io_msg.o 00:03:53.327 CC lib/nvme/nvme_poll_group.o 00:03:53.327 CC lib/nvme/nvme_zns.o 00:03:53.327 CC lib/nvme/nvme_stubs.o 00:03:53.585 CC lib/nvme/nvme_auth.o 00:03:53.585 CC lib/nvme/nvme_cuse.o 00:03:53.585 CC lib/nvme/nvme_vfio_user.o 00:03:53.585 CC lib/nvme/nvme_rdma.o 00:03:53.843 CC lib/accel/accel.o 00:03:53.843 CC lib/accel/accel_rpc.o 00:03:53.843 CC lib/accel/accel_sw.o 00:03:54.409 CC lib/blob/blobstore.o 00:03:54.409 CC lib/init/json_config.o 00:03:54.409 CC lib/virtio/virtio.o 00:03:54.666 CC lib/vfu_tgt/tgt_endpoint.o 00:03:54.666 CC lib/virtio/virtio_vhost_user.o 00:03:54.666 CC lib/init/subsystem.o 00:03:54.666 CC lib/virtio/virtio_vfio_user.o 00:03:54.666 CC lib/blob/request.o 00:03:54.925 CC lib/virtio/virtio_pci.o 00:03:54.925 CC lib/init/subsystem_rpc.o 00:03:54.925 CC lib/vfu_tgt/tgt_rpc.o 00:03:54.925 CC lib/init/rpc.o 00:03:54.925 CC lib/blob/zeroes.o 00:03:55.183 CC lib/blob/blob_bs_dev.o 00:03:55.183 LIB libspdk_vfu_tgt.a 00:03:55.183 LIB libspdk_accel.a 00:03:55.183 SO libspdk_vfu_tgt.so.3.0 00:03:55.183 LIB libspdk_init.a 00:03:55.183 SO libspdk_accel.so.16.0 00:03:55.183 SYMLINK libspdk_vfu_tgt.so 00:03:55.183 SO libspdk_init.so.5.0 00:03:55.183 LIB libspdk_virtio.a 00:03:55.442 SYMLINK libspdk_accel.so 00:03:55.442 SO libspdk_virtio.so.7.0 00:03:55.442 SYMLINK libspdk_init.so 00:03:55.442 LIB libspdk_nvme.a 00:03:55.442 SYMLINK libspdk_virtio.so 00:03:55.702 CC lib/bdev/bdev.o 00:03:55.702 CC lib/bdev/bdev_rpc.o 00:03:55.702 CC lib/bdev/bdev_zone.o 00:03:55.702 CC lib/bdev/part.o 00:03:55.702 CC lib/bdev/scsi_nvme.o 00:03:55.702 SO libspdk_nvme.so.13.1 00:03:55.702 CC lib/event/app.o 00:03:55.702 CC lib/event/reactor.o 00:03:55.702 CC lib/event/log_rpc.o 
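Each LIB / SO / SYMLINK triple in this part of the log is one SPDK component built both as a static archive and as a versioned shared object with an unversioned symlink next to it. A quick way to check one of them after the build finishes; the build/lib output directory is the usual SPDK layout and is assumed here rather than read from this log.
cd /home/vagrant/spdk_repo/spdk
ls -l build/lib/libspdk_log.*                        # expect libspdk_log.a and libspdk_log.so -> libspdk_log.so.7.0
readelf -d build/lib/libspdk_log.so.7.0 | grep SONAME  # confirms the embedded soname
nm -D --defined-only build/lib/libspdk_log.so.7.0 | head   # exported symbols only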
00:03:55.702 CC lib/event/app_rpc.o 00:03:55.960 CC lib/event/scheduler_static.o 00:03:55.960 SYMLINK libspdk_nvme.so 00:03:56.218 LIB libspdk_event.a 00:03:56.218 SO libspdk_event.so.14.0 00:03:56.475 SYMLINK libspdk_event.so 00:03:59.005 LIB libspdk_blob.a 00:03:59.005 SO libspdk_blob.so.11.0 00:03:59.005 SYMLINK libspdk_blob.so 00:03:59.005 LIB libspdk_bdev.a 00:03:59.264 CC lib/blobfs/blobfs.o 00:03:59.264 CC lib/blobfs/tree.o 00:03:59.264 CC lib/lvol/lvol.o 00:03:59.264 SO libspdk_bdev.so.16.0 00:03:59.264 SYMLINK libspdk_bdev.so 00:03:59.524 CC lib/nbd/nbd.o 00:03:59.524 CC lib/nbd/nbd_rpc.o 00:03:59.524 CC lib/nvmf/ctrlr.o 00:03:59.524 CC lib/scsi/lun.o 00:03:59.524 CC lib/scsi/port.o 00:03:59.524 CC lib/scsi/dev.o 00:03:59.524 CC lib/ftl/ftl_core.o 00:03:59.524 CC lib/ublk/ublk.o 00:03:59.781 CC lib/nvmf/ctrlr_discovery.o 00:03:59.781 CC lib/nvmf/ctrlr_bdev.o 00:04:00.098 CC lib/nvmf/subsystem.o 00:04:00.098 CC lib/scsi/scsi.o 00:04:00.098 CC lib/ftl/ftl_init.o 00:04:00.098 LIB libspdk_nbd.a 00:04:00.098 CC lib/scsi/scsi_bdev.o 00:04:00.098 SO libspdk_nbd.so.7.0 00:04:00.356 SYMLINK libspdk_nbd.so 00:04:00.356 CC lib/scsi/scsi_pr.o 00:04:00.356 LIB libspdk_blobfs.a 00:04:00.356 CC lib/ftl/ftl_layout.o 00:04:00.356 SO libspdk_blobfs.so.10.0 00:04:00.356 CC lib/ublk/ublk_rpc.o 00:04:00.356 CC lib/scsi/scsi_rpc.o 00:04:00.614 SYMLINK libspdk_blobfs.so 00:04:00.614 CC lib/ftl/ftl_debug.o 00:04:00.614 LIB libspdk_lvol.a 00:04:00.614 SO libspdk_lvol.so.10.0 00:04:00.614 CC lib/scsi/task.o 00:04:00.614 LIB libspdk_ublk.a 00:04:00.614 SYMLINK libspdk_lvol.so 00:04:00.614 CC lib/ftl/ftl_io.o 00:04:00.614 SO libspdk_ublk.so.3.0 00:04:00.614 CC lib/ftl/ftl_sb.o 00:04:00.614 CC lib/ftl/ftl_l2p.o 00:04:00.871 SYMLINK libspdk_ublk.so 00:04:00.871 CC lib/ftl/ftl_l2p_flat.o 00:04:00.871 CC lib/ftl/ftl_nv_cache.o 00:04:00.871 CC lib/nvmf/nvmf.o 00:04:00.871 CC lib/ftl/ftl_band.o 00:04:00.871 LIB libspdk_scsi.a 00:04:00.871 CC lib/ftl/ftl_band_ops.o 00:04:00.871 SO libspdk_scsi.so.9.0 00:04:00.871 CC lib/ftl/ftl_writer.o 00:04:00.871 CC lib/ftl/ftl_rq.o 00:04:01.140 CC lib/ftl/ftl_reloc.o 00:04:01.140 SYMLINK libspdk_scsi.so 00:04:01.140 CC lib/ftl/ftl_l2p_cache.o 00:04:01.140 CC lib/ftl/ftl_p2l.o 00:04:01.438 CC lib/nvmf/nvmf_rpc.o 00:04:01.438 CC lib/ftl/mngt/ftl_mngt.o 00:04:01.438 CC lib/iscsi/conn.o 00:04:01.438 CC lib/vhost/vhost.o 00:04:01.438 CC lib/iscsi/init_grp.o 00:04:01.696 CC lib/vhost/vhost_rpc.o 00:04:01.953 CC lib/vhost/vhost_scsi.o 00:04:01.953 CC lib/vhost/vhost_blk.o 00:04:01.953 CC lib/vhost/rte_vhost_user.o 00:04:01.953 CC lib/nvmf/transport.o 00:04:01.953 CC lib/iscsi/iscsi.o 00:04:01.953 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:02.211 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:02.211 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:02.211 CC lib/nvmf/tcp.o 00:04:02.469 CC lib/nvmf/stubs.o 00:04:02.469 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:02.469 CC lib/nvmf/mdns_server.o 00:04:02.469 CC lib/nvmf/vfio_user.o 00:04:02.726 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:02.726 CC lib/iscsi/md5.o 00:04:02.983 CC lib/nvmf/rdma.o 00:04:02.983 CC lib/nvmf/auth.o 00:04:02.983 CC lib/iscsi/param.o 00:04:02.983 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:02.983 CC lib/iscsi/portal_grp.o 00:04:03.240 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:03.240 LIB libspdk_vhost.a 00:04:03.240 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:03.240 SO libspdk_vhost.so.8.0 00:04:03.240 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:03.498 SYMLINK libspdk_vhost.so 00:04:03.498 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:03.498 CC lib/iscsi/tgt_node.o 
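The lib/nvmf objects compiling around here (ctrlr, subsystem, tcp, vfio_user, mdns_server, rdma, auth) are the pieces the nvmf-tcp functional tests drive later. For orientation, a minimal NVMe/TCP target bring-up over JSON-RPC looks roughly like this; the NQN, bdev name and address are made-up examples, and only port 4420 matches the NVMF_PORT default used by the test scripts.
# Sketch only: start a target and expose one malloc bdev over NVMe/TCP.
./build/bin/nvmf_tgt -m 0x3 &
sleep 2
scripts/rpc.py nvmf_create_transport -t TCP
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420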
00:04:03.498 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:03.498 CC lib/iscsi/iscsi_subsystem.o 00:04:03.755 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:03.755 CC lib/ftl/utils/ftl_conf.o 00:04:03.755 CC lib/iscsi/iscsi_rpc.o 00:04:04.046 CC lib/iscsi/task.o 00:04:04.046 CC lib/ftl/utils/ftl_md.o 00:04:04.046 CC lib/ftl/utils/ftl_mempool.o 00:04:04.046 CC lib/ftl/utils/ftl_bitmap.o 00:04:04.046 CC lib/ftl/utils/ftl_property.o 00:04:04.046 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:04.046 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:04.303 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:04.303 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:04.303 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:04.303 LIB libspdk_iscsi.a 00:04:04.561 SO libspdk_iscsi.so.8.0 00:04:04.561 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:04.561 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:04.561 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:04.561 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:04.561 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:04.561 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:04.561 CC lib/ftl/base/ftl_base_dev.o 00:04:04.561 SYMLINK libspdk_iscsi.so 00:04:04.561 CC lib/ftl/base/ftl_base_bdev.o 00:04:04.561 CC lib/ftl/ftl_trace.o 00:04:04.821 LIB libspdk_ftl.a 00:04:05.388 SO libspdk_ftl.so.9.0 00:04:05.646 SYMLINK libspdk_ftl.so 00:04:05.904 LIB libspdk_nvmf.a 00:04:06.162 SO libspdk_nvmf.so.19.0 00:04:06.420 SYMLINK libspdk_nvmf.so 00:04:06.700 CC module/vfu_device/vfu_virtio.o 00:04:06.959 CC module/env_dpdk/env_dpdk_rpc.o 00:04:06.959 CC module/keyring/linux/keyring.o 00:04:06.959 CC module/blob/bdev/blob_bdev.o 00:04:06.959 CC module/accel/error/accel_error.o 00:04:06.959 CC module/accel/dsa/accel_dsa.o 00:04:06.959 CC module/keyring/file/keyring.o 00:04:06.959 CC module/sock/posix/posix.o 00:04:06.959 CC module/accel/ioat/accel_ioat.o 00:04:06.959 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:06.959 LIB libspdk_env_dpdk_rpc.a 00:04:06.959 SO libspdk_env_dpdk_rpc.so.6.0 00:04:06.959 SYMLINK libspdk_env_dpdk_rpc.so 00:04:07.218 CC module/accel/ioat/accel_ioat_rpc.o 00:04:07.218 CC module/keyring/file/keyring_rpc.o 00:04:07.218 CC module/keyring/linux/keyring_rpc.o 00:04:07.218 CC module/accel/error/accel_error_rpc.o 00:04:07.218 LIB libspdk_scheduler_dynamic.a 00:04:07.218 SO libspdk_scheduler_dynamic.so.4.0 00:04:07.218 CC module/accel/dsa/accel_dsa_rpc.o 00:04:07.218 LIB libspdk_accel_ioat.a 00:04:07.218 SYMLINK libspdk_scheduler_dynamic.so 00:04:07.218 LIB libspdk_keyring_file.a 00:04:07.218 LIB libspdk_blob_bdev.a 00:04:07.218 LIB libspdk_keyring_linux.a 00:04:07.218 SO libspdk_accel_ioat.so.6.0 00:04:07.218 SO libspdk_blob_bdev.so.11.0 00:04:07.218 LIB libspdk_accel_error.a 00:04:07.218 SO libspdk_keyring_linux.so.1.0 00:04:07.218 SO libspdk_keyring_file.so.1.0 00:04:07.477 SO libspdk_accel_error.so.2.0 00:04:07.477 SYMLINK libspdk_accel_ioat.so 00:04:07.477 SYMLINK libspdk_keyring_file.so 00:04:07.477 SYMLINK libspdk_blob_bdev.so 00:04:07.477 CC module/vfu_device/vfu_virtio_blk.o 00:04:07.477 CC module/vfu_device/vfu_virtio_scsi.o 00:04:07.477 CC module/vfu_device/vfu_virtio_rpc.o 00:04:07.477 LIB libspdk_accel_dsa.a 00:04:07.477 SYMLINK libspdk_keyring_linux.so 00:04:07.477 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:07.477 SO libspdk_accel_dsa.so.5.0 00:04:07.477 SYMLINK libspdk_accel_error.so 00:04:07.477 CC module/scheduler/gscheduler/gscheduler.o 00:04:07.477 SYMLINK libspdk_accel_dsa.so 00:04:07.733 LIB libspdk_scheduler_dpdk_governor.a 00:04:07.733 CC module/accel/iaa/accel_iaa.o 00:04:07.733 SO 
libspdk_scheduler_dpdk_governor.so.4.0 00:04:07.733 LIB libspdk_scheduler_gscheduler.a 00:04:07.733 CC module/accel/iaa/accel_iaa_rpc.o 00:04:07.733 SO libspdk_scheduler_gscheduler.so.4.0 00:04:07.733 CC module/bdev/delay/vbdev_delay.o 00:04:07.733 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:07.733 SYMLINK libspdk_scheduler_gscheduler.so 00:04:07.990 LIB libspdk_vfu_device.a 00:04:07.990 CC module/bdev/error/vbdev_error.o 00:04:07.990 CC module/blobfs/bdev/blobfs_bdev.o 00:04:07.990 LIB libspdk_sock_posix.a 00:04:07.990 SO libspdk_vfu_device.so.3.0 00:04:07.990 CC module/bdev/error/vbdev_error_rpc.o 00:04:07.990 CC module/bdev/gpt/gpt.o 00:04:07.990 CC module/bdev/lvol/vbdev_lvol.o 00:04:07.990 SO libspdk_sock_posix.so.6.0 00:04:07.990 LIB libspdk_accel_iaa.a 00:04:07.990 CC module/bdev/malloc/bdev_malloc.o 00:04:07.990 SYMLINK libspdk_vfu_device.so 00:04:07.990 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:07.990 SO libspdk_accel_iaa.so.3.0 00:04:07.990 SYMLINK libspdk_sock_posix.so 00:04:08.247 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:08.247 CC module/bdev/gpt/vbdev_gpt.o 00:04:08.247 SYMLINK libspdk_accel_iaa.so 00:04:08.247 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:08.247 LIB libspdk_bdev_error.a 00:04:08.247 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:08.247 SO libspdk_bdev_error.so.6.0 00:04:08.247 LIB libspdk_blobfs_bdev.a 00:04:08.247 CC module/bdev/null/bdev_null.o 00:04:08.247 SO libspdk_blobfs_bdev.so.6.0 00:04:08.504 SYMLINK libspdk_bdev_error.so 00:04:08.504 LIB libspdk_bdev_delay.a 00:04:08.504 CC module/bdev/nvme/bdev_nvme.o 00:04:08.504 SYMLINK libspdk_blobfs_bdev.so 00:04:08.504 CC module/bdev/null/bdev_null_rpc.o 00:04:08.504 SO libspdk_bdev_delay.so.6.0 00:04:08.504 CC module/bdev/passthru/vbdev_passthru.o 00:04:08.504 LIB libspdk_bdev_gpt.a 00:04:08.504 LIB libspdk_bdev_malloc.a 00:04:08.504 SO libspdk_bdev_gpt.so.6.0 00:04:08.504 SO libspdk_bdev_malloc.so.6.0 00:04:08.504 SYMLINK libspdk_bdev_delay.so 00:04:08.504 CC module/bdev/raid/bdev_raid.o 00:04:08.504 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:08.504 SYMLINK libspdk_bdev_gpt.so 00:04:08.504 CC module/bdev/nvme/nvme_rpc.o 00:04:08.761 SYMLINK libspdk_bdev_malloc.so 00:04:08.761 CC module/bdev/nvme/bdev_mdns_client.o 00:04:08.761 CC module/bdev/nvme/vbdev_opal.o 00:04:08.761 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:08.761 LIB libspdk_bdev_lvol.a 00:04:08.761 LIB libspdk_bdev_null.a 00:04:08.761 SO libspdk_bdev_lvol.so.6.0 00:04:08.761 SO libspdk_bdev_null.so.6.0 00:04:08.761 SYMLINK libspdk_bdev_lvol.so 00:04:08.761 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:08.761 SYMLINK libspdk_bdev_null.so 00:04:09.019 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:09.019 CC module/bdev/raid/bdev_raid_rpc.o 00:04:09.019 CC module/bdev/raid/bdev_raid_sb.o 00:04:09.019 CC module/bdev/raid/raid0.o 00:04:09.019 CC module/bdev/split/vbdev_split.o 00:04:09.019 LIB libspdk_bdev_passthru.a 00:04:09.276 SO libspdk_bdev_passthru.so.6.0 00:04:09.276 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:09.276 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:09.276 SYMLINK libspdk_bdev_passthru.so 00:04:09.276 CC module/bdev/aio/bdev_aio.o 00:04:09.276 CC module/bdev/raid/raid1.o 00:04:09.276 CC module/bdev/split/vbdev_split_rpc.o 00:04:09.276 CC module/bdev/raid/concat.o 00:04:09.533 CC module/bdev/ftl/bdev_ftl.o 00:04:09.533 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:09.533 LIB libspdk_bdev_split.a 00:04:09.533 SO libspdk_bdev_split.so.6.0 00:04:09.533 CC module/bdev/iscsi/bdev_iscsi.o 00:04:09.533 CC 
module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:09.802 SYMLINK libspdk_bdev_split.so 00:04:09.802 CC module/bdev/aio/bdev_aio_rpc.o 00:04:09.802 LIB libspdk_bdev_zone_block.a 00:04:09.802 SO libspdk_bdev_zone_block.so.6.0 00:04:09.802 SYMLINK libspdk_bdev_zone_block.so 00:04:09.802 LIB libspdk_bdev_ftl.a 00:04:09.802 SO libspdk_bdev_ftl.so.6.0 00:04:09.802 LIB libspdk_bdev_aio.a 00:04:09.802 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:09.802 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:09.802 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:09.802 SO libspdk_bdev_aio.so.6.0 00:04:10.066 SYMLINK libspdk_bdev_ftl.so 00:04:10.066 SYMLINK libspdk_bdev_aio.so 00:04:10.066 LIB libspdk_bdev_raid.a 00:04:10.066 LIB libspdk_bdev_iscsi.a 00:04:10.066 SO libspdk_bdev_raid.so.6.0 00:04:10.066 SO libspdk_bdev_iscsi.so.6.0 00:04:10.066 SYMLINK libspdk_bdev_iscsi.so 00:04:10.324 SYMLINK libspdk_bdev_raid.so 00:04:10.581 LIB libspdk_bdev_virtio.a 00:04:10.581 SO libspdk_bdev_virtio.so.6.0 00:04:10.581 SYMLINK libspdk_bdev_virtio.so 00:04:11.515 LIB libspdk_bdev_nvme.a 00:04:11.773 SO libspdk_bdev_nvme.so.7.0 00:04:11.773 SYMLINK libspdk_bdev_nvme.so 00:04:12.338 CC module/event/subsystems/sock/sock.o 00:04:12.338 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:12.338 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:12.338 CC module/event/subsystems/iobuf/iobuf.o 00:04:12.338 CC module/event/subsystems/vmd/vmd.o 00:04:12.338 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:12.338 CC module/event/subsystems/keyring/keyring.o 00:04:12.338 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:12.338 CC module/event/subsystems/scheduler/scheduler.o 00:04:12.596 LIB libspdk_event_sock.a 00:04:12.596 LIB libspdk_event_vhost_blk.a 00:04:12.596 LIB libspdk_event_keyring.a 00:04:12.596 SO libspdk_event_sock.so.5.0 00:04:12.596 SO libspdk_event_vhost_blk.so.3.0 00:04:12.596 LIB libspdk_event_scheduler.a 00:04:12.596 LIB libspdk_event_vfu_tgt.a 00:04:12.596 SO libspdk_event_keyring.so.1.0 00:04:12.596 LIB libspdk_event_vmd.a 00:04:12.596 SO libspdk_event_vfu_tgt.so.3.0 00:04:12.596 SO libspdk_event_scheduler.so.4.0 00:04:12.596 LIB libspdk_event_iobuf.a 00:04:12.596 SYMLINK libspdk_event_vhost_blk.so 00:04:12.596 SYMLINK libspdk_event_sock.so 00:04:12.596 SO libspdk_event_vmd.so.6.0 00:04:12.596 SO libspdk_event_iobuf.so.3.0 00:04:12.596 SYMLINK libspdk_event_keyring.so 00:04:12.596 SYMLINK libspdk_event_vfu_tgt.so 00:04:12.596 SYMLINK libspdk_event_scheduler.so 00:04:12.596 SYMLINK libspdk_event_vmd.so 00:04:12.878 SYMLINK libspdk_event_iobuf.so 00:04:13.158 CC module/event/subsystems/accel/accel.o 00:04:13.158 LIB libspdk_event_accel.a 00:04:13.158 SO libspdk_event_accel.so.6.0 00:04:13.158 SYMLINK libspdk_event_accel.so 00:04:13.724 CC module/event/subsystems/bdev/bdev.o 00:04:13.724 LIB libspdk_event_bdev.a 00:04:13.724 SO libspdk_event_bdev.so.6.0 00:04:13.982 SYMLINK libspdk_event_bdev.so 00:04:14.241 CC module/event/subsystems/ublk/ublk.o 00:04:14.241 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:14.241 CC module/event/subsystems/nbd/nbd.o 00:04:14.241 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:14.241 CC module/event/subsystems/scsi/scsi.o 00:04:14.241 LIB libspdk_event_nbd.a 00:04:14.241 LIB libspdk_event_ublk.a 00:04:14.241 LIB libspdk_event_scsi.a 00:04:14.500 SO libspdk_event_nbd.so.6.0 00:04:14.500 SO libspdk_event_ublk.so.3.0 00:04:14.500 SO libspdk_event_scsi.so.6.0 00:04:14.500 SYMLINK libspdk_event_nbd.so 00:04:14.500 LIB libspdk_event_nvmf.a 00:04:14.500 SYMLINK libspdk_event_ublk.so 
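The module/event/subsystems objects going past here (sock, iobuf, vmd, keyring, vfu_tgt, scheduler, accel, bdev, scsi, nvmf, ...) are the pluggable pieces of the app framework; each registers itself via SPDK_SUBSYSTEM_REGISTER. With any SPDK app running, the resulting set can be listed over JSON-RPC; illustrative only:
scripts/rpc.py framework_get_subsystems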
00:04:14.500 SYMLINK libspdk_event_scsi.so 00:04:14.500 SO libspdk_event_nvmf.so.6.0 00:04:14.500 SYMLINK libspdk_event_nvmf.so 00:04:14.757 CC module/event/subsystems/iscsi/iscsi.o 00:04:14.757 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:15.015 LIB libspdk_event_vhost_scsi.a 00:04:15.015 LIB libspdk_event_iscsi.a 00:04:15.015 SO libspdk_event_vhost_scsi.so.3.0 00:04:15.015 SO libspdk_event_iscsi.so.6.0 00:04:15.015 SYMLINK libspdk_event_vhost_scsi.so 00:04:15.015 SYMLINK libspdk_event_iscsi.so 00:04:15.273 SO libspdk.so.6.0 00:04:15.273 SYMLINK libspdk.so 00:04:15.531 TEST_HEADER include/spdk/accel.h 00:04:15.531 CXX app/trace/trace.o 00:04:15.531 TEST_HEADER include/spdk/accel_module.h 00:04:15.531 TEST_HEADER include/spdk/assert.h 00:04:15.531 TEST_HEADER include/spdk/barrier.h 00:04:15.531 TEST_HEADER include/spdk/base64.h 00:04:15.531 TEST_HEADER include/spdk/bdev.h 00:04:15.531 TEST_HEADER include/spdk/bdev_module.h 00:04:15.531 TEST_HEADER include/spdk/bdev_zone.h 00:04:15.531 TEST_HEADER include/spdk/bit_array.h 00:04:15.531 TEST_HEADER include/spdk/bit_pool.h 00:04:15.531 TEST_HEADER include/spdk/blob_bdev.h 00:04:15.531 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:15.531 TEST_HEADER include/spdk/blobfs.h 00:04:15.531 TEST_HEADER include/spdk/blob.h 00:04:15.531 TEST_HEADER include/spdk/conf.h 00:04:15.531 TEST_HEADER include/spdk/config.h 00:04:15.531 TEST_HEADER include/spdk/cpuset.h 00:04:15.531 TEST_HEADER include/spdk/crc16.h 00:04:15.531 TEST_HEADER include/spdk/crc32.h 00:04:15.531 TEST_HEADER include/spdk/crc64.h 00:04:15.531 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:15.531 TEST_HEADER include/spdk/dif.h 00:04:15.531 TEST_HEADER include/spdk/dma.h 00:04:15.531 TEST_HEADER include/spdk/endian.h 00:04:15.531 TEST_HEADER include/spdk/env_dpdk.h 00:04:15.531 TEST_HEADER include/spdk/env.h 00:04:15.531 TEST_HEADER include/spdk/event.h 00:04:15.531 TEST_HEADER include/spdk/fd_group.h 00:04:15.531 TEST_HEADER include/spdk/fd.h 00:04:15.531 TEST_HEADER include/spdk/file.h 00:04:15.531 TEST_HEADER include/spdk/ftl.h 00:04:15.531 TEST_HEADER include/spdk/gpt_spec.h 00:04:15.531 TEST_HEADER include/spdk/hexlify.h 00:04:15.531 TEST_HEADER include/spdk/histogram_data.h 00:04:15.531 CC examples/ioat/perf/perf.o 00:04:15.531 TEST_HEADER include/spdk/idxd.h 00:04:15.531 TEST_HEADER include/spdk/idxd_spec.h 00:04:15.531 CC test/thread/poller_perf/poller_perf.o 00:04:15.531 TEST_HEADER include/spdk/init.h 00:04:15.531 TEST_HEADER include/spdk/ioat.h 00:04:15.531 TEST_HEADER include/spdk/ioat_spec.h 00:04:15.531 TEST_HEADER include/spdk/iscsi_spec.h 00:04:15.531 TEST_HEADER include/spdk/json.h 00:04:15.531 CC examples/util/zipf/zipf.o 00:04:15.531 TEST_HEADER include/spdk/jsonrpc.h 00:04:15.531 TEST_HEADER include/spdk/keyring.h 00:04:15.531 TEST_HEADER include/spdk/keyring_module.h 00:04:15.531 TEST_HEADER include/spdk/likely.h 00:04:15.789 TEST_HEADER include/spdk/log.h 00:04:15.789 TEST_HEADER include/spdk/lvol.h 00:04:15.789 TEST_HEADER include/spdk/memory.h 00:04:15.789 CC test/dma/test_dma/test_dma.o 00:04:15.789 TEST_HEADER include/spdk/mmio.h 00:04:15.789 TEST_HEADER include/spdk/nbd.h 00:04:15.789 TEST_HEADER include/spdk/net.h 00:04:15.789 TEST_HEADER include/spdk/notify.h 00:04:15.789 TEST_HEADER include/spdk/nvme.h 00:04:15.789 TEST_HEADER include/spdk/nvme_intel.h 00:04:15.789 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:15.789 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:15.789 CC test/app/bdev_svc/bdev_svc.o 00:04:15.789 TEST_HEADER 
include/spdk/nvme_spec.h 00:04:15.789 TEST_HEADER include/spdk/nvme_zns.h 00:04:15.789 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:15.789 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:15.789 TEST_HEADER include/spdk/nvmf.h 00:04:15.789 TEST_HEADER include/spdk/nvmf_spec.h 00:04:15.789 TEST_HEADER include/spdk/nvmf_transport.h 00:04:15.789 TEST_HEADER include/spdk/opal.h 00:04:15.789 TEST_HEADER include/spdk/opal_spec.h 00:04:15.789 TEST_HEADER include/spdk/pci_ids.h 00:04:15.789 TEST_HEADER include/spdk/pipe.h 00:04:15.789 TEST_HEADER include/spdk/queue.h 00:04:15.789 TEST_HEADER include/spdk/reduce.h 00:04:15.789 TEST_HEADER include/spdk/rpc.h 00:04:15.789 TEST_HEADER include/spdk/scheduler.h 00:04:15.789 TEST_HEADER include/spdk/scsi.h 00:04:15.789 TEST_HEADER include/spdk/scsi_spec.h 00:04:15.790 TEST_HEADER include/spdk/sock.h 00:04:15.790 TEST_HEADER include/spdk/stdinc.h 00:04:15.790 TEST_HEADER include/spdk/string.h 00:04:15.790 CC test/env/mem_callbacks/mem_callbacks.o 00:04:15.790 TEST_HEADER include/spdk/thread.h 00:04:15.790 TEST_HEADER include/spdk/trace.h 00:04:15.790 TEST_HEADER include/spdk/trace_parser.h 00:04:15.790 TEST_HEADER include/spdk/tree.h 00:04:15.790 TEST_HEADER include/spdk/ublk.h 00:04:15.790 TEST_HEADER include/spdk/util.h 00:04:15.790 TEST_HEADER include/spdk/uuid.h 00:04:15.790 TEST_HEADER include/spdk/version.h 00:04:15.790 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:15.790 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:15.790 TEST_HEADER include/spdk/vhost.h 00:04:15.790 TEST_HEADER include/spdk/vmd.h 00:04:15.790 TEST_HEADER include/spdk/xor.h 00:04:15.790 TEST_HEADER include/spdk/zipf.h 00:04:15.790 CXX test/cpp_headers/accel.o 00:04:15.790 LINK interrupt_tgt 00:04:15.790 LINK poller_perf 00:04:15.790 LINK zipf 00:04:16.047 LINK ioat_perf 00:04:16.047 LINK bdev_svc 00:04:16.047 CXX test/cpp_headers/accel_module.o 00:04:16.047 LINK spdk_trace 00:04:16.047 CC test/env/vtophys/vtophys.o 00:04:16.047 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:16.047 CC test/env/memory/memory_ut.o 00:04:16.305 CXX test/cpp_headers/assert.o 00:04:16.305 CC examples/ioat/verify/verify.o 00:04:16.305 LINK test_dma 00:04:16.305 LINK vtophys 00:04:16.305 LINK env_dpdk_post_init 00:04:16.305 CXX test/cpp_headers/barrier.o 00:04:16.305 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:16.305 CC app/trace_record/trace_record.o 00:04:16.305 LINK mem_callbacks 00:04:16.564 LINK verify 00:04:16.564 CXX test/cpp_headers/base64.o 00:04:16.564 CC test/app/histogram_perf/histogram_perf.o 00:04:16.564 CC test/app/jsoncat/jsoncat.o 00:04:16.564 CC app/nvmf_tgt/nvmf_main.o 00:04:16.822 CC test/rpc_client/rpc_client_test.o 00:04:16.823 LINK spdk_trace_record 00:04:16.823 LINK histogram_perf 00:04:16.823 LINK jsoncat 00:04:16.823 CXX test/cpp_headers/bdev.o 00:04:17.080 LINK nvmf_tgt 00:04:17.080 LINK rpc_client_test 00:04:17.080 CC test/accel/dif/dif.o 00:04:17.080 LINK nvme_fuzz 00:04:17.080 CC test/app/stub/stub.o 00:04:17.080 CXX test/cpp_headers/bdev_module.o 00:04:17.337 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:17.337 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:17.337 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:17.337 CC test/env/pci/pci_ut.o 00:04:17.337 LINK stub 00:04:17.337 CXX test/cpp_headers/bdev_zone.o 00:04:17.337 CC app/iscsi_tgt/iscsi_tgt.o 00:04:17.594 CC app/spdk_tgt/spdk_tgt.o 00:04:17.594 LINK dif 00:04:17.594 LINK memory_ut 00:04:17.594 CXX test/cpp_headers/bit_array.o 00:04:17.594 LINK iscsi_tgt 00:04:17.852 CC app/spdk_lspci/spdk_lspci.o 
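The application link steps in this stretch (nvmf_tgt, iscsi_tgt, spdk_tgt, the fuzz and ut binaries) normally land under build/bin in the repo. A minimal smoke test once the link finishes, with the default RPC socket assumed rather than taken from this log:
./build/bin/spdk_tgt -m 0x1 &
sleep 2
scripts/rpc.py spdk_get_version          # confirms the target answers on the default RPC socket
scripts/rpc.py spdk_kill_instance SIGTERM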
00:04:17.852 LINK vhost_fuzz 00:04:17.852 CXX test/cpp_headers/bit_pool.o 00:04:17.852 LINK spdk_tgt 00:04:17.852 LINK pci_ut 00:04:17.852 LINK spdk_lspci 00:04:18.109 CXX test/cpp_headers/blob_bdev.o 00:04:18.110 CXX test/cpp_headers/blobfs_bdev.o 00:04:18.110 CC app/spdk_nvme_perf/perf.o 00:04:18.110 CC app/spdk_nvme_identify/identify.o 00:04:18.110 CC test/blobfs/mkfs/mkfs.o 00:04:18.110 CXX test/cpp_headers/blobfs.o 00:04:18.367 CXX test/cpp_headers/blob.o 00:04:18.367 LINK mkfs 00:04:18.367 CC examples/thread/thread/thread_ex.o 00:04:18.625 CC examples/sock/hello_world/hello_sock.o 00:04:18.625 CC test/event/event_perf/event_perf.o 00:04:18.625 CXX test/cpp_headers/conf.o 00:04:18.625 CXX test/cpp_headers/config.o 00:04:18.882 LINK thread 00:04:18.882 CXX test/cpp_headers/cpuset.o 00:04:18.882 LINK event_perf 00:04:18.882 CC examples/vmd/lsvmd/lsvmd.o 00:04:18.882 LINK hello_sock 00:04:19.215 CXX test/cpp_headers/crc16.o 00:04:19.215 LINK lsvmd 00:04:19.215 CC test/lvol/esnap/esnap.o 00:04:19.215 CC test/event/reactor/reactor.o 00:04:19.215 CXX test/cpp_headers/crc32.o 00:04:19.215 LINK spdk_nvme_perf 00:04:19.215 CC test/event/reactor_perf/reactor_perf.o 00:04:19.215 CC test/event/app_repeat/app_repeat.o 00:04:19.215 LINK reactor 00:04:19.473 CC examples/vmd/led/led.o 00:04:19.474 LINK spdk_nvme_identify 00:04:19.474 CXX test/cpp_headers/crc64.o 00:04:19.474 LINK reactor_perf 00:04:19.474 LINK app_repeat 00:04:19.474 CC app/spdk_nvme_discover/discovery_aer.o 00:04:19.474 LINK led 00:04:19.731 CXX test/cpp_headers/dif.o 00:04:19.731 LINK iscsi_fuzz 00:04:19.731 CC test/event/scheduler/scheduler.o 00:04:19.989 CC test/nvme/aer/aer.o 00:04:19.989 LINK spdk_nvme_discover 00:04:19.989 CXX test/cpp_headers/dma.o 00:04:19.989 CC test/bdev/bdevio/bdevio.o 00:04:19.989 CC examples/idxd/perf/perf.o 00:04:20.246 CC examples/accel/perf/accel_perf.o 00:04:20.246 CXX test/cpp_headers/endian.o 00:04:20.246 LINK scheduler 00:04:20.504 CC app/spdk_top/spdk_top.o 00:04:20.504 LINK aer 00:04:20.504 CXX test/cpp_headers/env_dpdk.o 00:04:20.504 CC examples/blob/hello_world/hello_blob.o 00:04:20.762 LINK idxd_perf 00:04:20.762 CXX test/cpp_headers/env.o 00:04:20.762 LINK bdevio 00:04:21.020 CXX test/cpp_headers/event.o 00:04:21.020 CXX test/cpp_headers/fd_group.o 00:04:21.020 LINK hello_blob 00:04:21.020 LINK accel_perf 00:04:21.020 CXX test/cpp_headers/fd.o 00:04:21.020 CC test/nvme/reset/reset.o 00:04:21.020 CXX test/cpp_headers/file.o 00:04:21.277 CXX test/cpp_headers/ftl.o 00:04:21.278 CXX test/cpp_headers/gpt_spec.o 00:04:21.278 CC test/nvme/sgl/sgl.o 00:04:21.278 CC test/nvme/e2edp/nvme_dp.o 00:04:21.534 CC test/nvme/overhead/overhead.o 00:04:21.534 CXX test/cpp_headers/hexlify.o 00:04:21.534 CC examples/blob/cli/blobcli.o 00:04:21.534 LINK reset 00:04:21.792 CXX test/cpp_headers/histogram_data.o 00:04:21.792 LINK sgl 00:04:21.792 CXX test/cpp_headers/idxd.o 00:04:21.792 CC examples/nvme/hello_world/hello_world.o 00:04:21.792 LINK overhead 00:04:21.792 LINK nvme_dp 00:04:22.050 LINK spdk_top 00:04:22.050 CC test/nvme/err_injection/err_injection.o 00:04:22.050 CXX test/cpp_headers/idxd_spec.o 00:04:22.050 CC test/nvme/startup/startup.o 00:04:22.050 CXX test/cpp_headers/init.o 00:04:22.050 LINK hello_world 00:04:22.050 CC test/nvme/reserve/reserve.o 00:04:22.308 LINK blobcli 00:04:22.308 LINK startup 00:04:22.308 CXX test/cpp_headers/ioat.o 00:04:22.308 CC app/vhost/vhost.o 00:04:22.308 LINK err_injection 00:04:22.308 LINK reserve 00:04:22.308 CC app/spdk_dd/spdk_dd.o 00:04:22.308 CC 
examples/nvme/reconnect/reconnect.o 00:04:22.566 CXX test/cpp_headers/ioat_spec.o 00:04:22.566 CC test/nvme/simple_copy/simple_copy.o 00:04:22.566 CXX test/cpp_headers/iscsi_spec.o 00:04:22.566 CXX test/cpp_headers/json.o 00:04:22.566 LINK vhost 00:04:22.825 CXX test/cpp_headers/jsonrpc.o 00:04:22.825 CC test/nvme/connect_stress/connect_stress.o 00:04:22.825 LINK simple_copy 00:04:22.825 CC test/nvme/boot_partition/boot_partition.o 00:04:22.825 CC test/nvme/compliance/nvme_compliance.o 00:04:22.825 LINK spdk_dd 00:04:22.825 LINK reconnect 00:04:23.082 CXX test/cpp_headers/keyring.o 00:04:23.082 LINK connect_stress 00:04:23.082 LINK boot_partition 00:04:23.082 CXX test/cpp_headers/keyring_module.o 00:04:23.340 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:23.340 CC test/nvme/fused_ordering/fused_ordering.o 00:04:23.340 CC examples/bdev/hello_world/hello_bdev.o 00:04:23.340 LINK nvme_compliance 00:04:23.340 CXX test/cpp_headers/likely.o 00:04:23.340 CC app/fio/nvme/fio_plugin.o 00:04:23.340 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:23.597 LINK fused_ordering 00:04:23.597 CC app/fio/bdev/fio_plugin.o 00:04:23.597 CXX test/cpp_headers/log.o 00:04:23.597 CC test/nvme/fdp/fdp.o 00:04:23.597 LINK hello_bdev 00:04:23.597 CC examples/nvme/arbitration/arbitration.o 00:04:23.597 LINK doorbell_aers 00:04:23.856 CXX test/cpp_headers/lvol.o 00:04:23.856 CC test/nvme/cuse/cuse.o 00:04:23.856 LINK nvme_manage 00:04:23.856 CXX test/cpp_headers/memory.o 00:04:24.114 CC examples/nvme/hotplug/hotplug.o 00:04:24.114 LINK fdp 00:04:24.114 CC examples/bdev/bdevperf/bdevperf.o 00:04:24.114 LINK arbitration 00:04:24.114 CXX test/cpp_headers/mmio.o 00:04:24.114 LINK spdk_bdev 00:04:24.114 LINK spdk_nvme 00:04:24.114 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:24.374 CXX test/cpp_headers/nbd.o 00:04:24.374 LINK hotplug 00:04:24.374 CC examples/nvme/abort/abort.o 00:04:24.374 CXX test/cpp_headers/net.o 00:04:24.374 CXX test/cpp_headers/notify.o 00:04:24.374 CXX test/cpp_headers/nvme.o 00:04:24.374 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:24.374 LINK cmb_copy 00:04:24.374 CXX test/cpp_headers/nvme_intel.o 00:04:24.374 CXX test/cpp_headers/nvme_ocssd.o 00:04:24.374 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:24.632 CXX test/cpp_headers/nvme_spec.o 00:04:24.632 LINK pmr_persistence 00:04:24.632 CXX test/cpp_headers/nvme_zns.o 00:04:24.632 CXX test/cpp_headers/nvmf_cmd.o 00:04:24.632 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:24.632 CXX test/cpp_headers/nvmf.o 00:04:24.632 CXX test/cpp_headers/nvmf_spec.o 00:04:24.632 LINK abort 00:04:24.889 CXX test/cpp_headers/nvmf_transport.o 00:04:24.889 CXX test/cpp_headers/opal.o 00:04:24.889 CXX test/cpp_headers/opal_spec.o 00:04:24.889 CXX test/cpp_headers/pci_ids.o 00:04:24.889 CXX test/cpp_headers/pipe.o 00:04:24.889 CXX test/cpp_headers/queue.o 00:04:24.889 CXX test/cpp_headers/reduce.o 00:04:24.889 CXX test/cpp_headers/rpc.o 00:04:25.147 CXX test/cpp_headers/scheduler.o 00:04:25.147 CXX test/cpp_headers/scsi.o 00:04:25.147 LINK bdevperf 00:04:25.147 CXX test/cpp_headers/scsi_spec.o 00:04:25.147 CXX test/cpp_headers/sock.o 00:04:25.147 CXX test/cpp_headers/stdinc.o 00:04:25.147 CXX test/cpp_headers/string.o 00:04:25.147 CXX test/cpp_headers/thread.o 00:04:25.147 CXX test/cpp_headers/trace.o 00:04:25.147 CXX test/cpp_headers/trace_parser.o 00:04:25.147 CXX test/cpp_headers/tree.o 00:04:25.406 CXX test/cpp_headers/ublk.o 00:04:25.406 CXX test/cpp_headers/util.o 00:04:25.406 CXX test/cpp_headers/uuid.o 00:04:25.406 CXX test/cpp_headers/version.o 
00:04:25.406 CXX test/cpp_headers/vfio_user_pci.o 00:04:25.406 CXX test/cpp_headers/vfio_user_spec.o 00:04:25.406 CXX test/cpp_headers/vhost.o 00:04:25.406 CXX test/cpp_headers/vmd.o 00:04:25.406 LINK cuse 00:04:25.406 CXX test/cpp_headers/xor.o 00:04:25.406 CXX test/cpp_headers/zipf.o 00:04:25.406 CC examples/nvmf/nvmf/nvmf.o 00:04:25.972 LINK nvmf 00:04:26.908 LINK esnap 00:04:27.475 00:04:27.475 real 1m28.433s 00:04:27.475 user 8m39.872s 00:04:27.475 sys 1m59.807s 00:04:27.475 18:11:39 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:27.475 ************************************ 00:04:27.475 END TEST make 00:04:27.475 18:11:39 make -- common/autotest_common.sh@10 -- $ set +x 00:04:27.475 ************************************ 00:04:27.475 18:11:39 -- common/autotest_common.sh@1142 -- $ return 0 00:04:27.475 18:11:39 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:27.475 18:11:39 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:27.475 18:11:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:27.475 18:11:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:27.475 18:11:39 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:27.475 18:11:39 -- pm/common@44 -- $ pid=5201 00:04:27.475 18:11:39 -- pm/common@50 -- $ kill -TERM 5201 00:04:27.475 18:11:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:27.475 18:11:39 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:27.475 18:11:39 -- pm/common@44 -- $ pid=5203 00:04:27.475 18:11:39 -- pm/common@50 -- $ kill -TERM 5203 00:04:27.733 18:11:39 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:27.733 18:11:39 -- nvmf/common.sh@7 -- # uname -s 00:04:27.733 18:11:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:27.733 18:11:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:27.733 18:11:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:27.733 18:11:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:27.733 18:11:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:27.733 18:11:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:27.733 18:11:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:27.733 18:11:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:27.733 18:11:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:27.733 18:11:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:27.734 18:11:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:04:27.734 18:11:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:04:27.734 18:11:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:27.734 18:11:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:27.734 18:11:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:27.734 18:11:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:27.734 18:11:39 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:27.734 18:11:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:27.734 18:11:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:27.734 18:11:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:27.734 18:11:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.734 18:11:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.734 18:11:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.734 18:11:39 -- paths/export.sh@5 -- # export PATH 00:04:27.734 18:11:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.734 18:11:39 -- nvmf/common.sh@47 -- # : 0 00:04:27.734 18:11:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:27.734 18:11:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:27.734 18:11:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:27.734 18:11:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:27.734 18:11:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:27.734 18:11:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:27.734 18:11:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:27.734 18:11:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:27.734 18:11:39 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:27.734 18:11:39 -- spdk/autotest.sh@32 -- # uname -s 00:04:27.734 18:11:39 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:27.734 18:11:39 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:27.734 18:11:39 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:27.734 18:11:39 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:27.734 18:11:39 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:27.734 18:11:39 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:27.734 18:11:39 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:27.734 18:11:39 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:27.734 18:11:39 -- spdk/autotest.sh@48 -- # udevadm_pid=55335 00:04:27.734 18:11:39 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:27.734 18:11:39 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:27.734 18:11:39 -- pm/common@17 -- # local monitor 00:04:27.734 18:11:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:27.734 18:11:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:27.734 18:11:39 -- pm/common@25 -- # sleep 1 00:04:27.734 18:11:39 -- pm/common@21 -- # date +%s 00:04:27.734 18:11:39 -- pm/common@21 -- # date +%s 00:04:27.734 18:11:39 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721671899 
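The collect-cpu-load and collect-vmstat helpers started here (and later stopped through the collect-*.pid files and kill -TERM seen earlier in the log) follow a simple pattern: sample a system statistic in a background loop, append to a log named after a shared epoch timestamp, and record the PID so teardown can signal it. A rough sketch of that pattern, with an assumed 10-second interval and the resolved output path; the real perf/pm scripts do more than this:

outdir=/home/vagrant/spdk_repo/output/power    # resolved form of spdk/../output/power used above
stamp=$(date +%s)
(
  while true; do
    # one timestamped vmstat sample per iteration
    { date '+%T'; vmstat 1 2 | tail -n 1; } >> "$outdir/monitor.autotest.sh.${stamp}_collect-vmstat.pm.log"
    sleep 10                                   # sampling interval is an assumption
  done
) &
echo $! > "$outdir/collect-vmstat.pid"         # teardown later runs: kill -TERM "$(cat "$outdir/collect-vmstat.pid")"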
00:04:27.734 18:11:39 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721671899 00:04:27.734 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721671899_collect-vmstat.pm.log 00:04:27.734 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721671899_collect-cpu-load.pm.log 00:04:28.666 18:11:40 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:28.666 18:11:40 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:28.666 18:11:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:28.666 18:11:40 -- common/autotest_common.sh@10 -- # set +x 00:04:28.666 18:11:40 -- spdk/autotest.sh@59 -- # create_test_list 00:04:28.666 18:11:40 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:28.666 18:11:40 -- common/autotest_common.sh@10 -- # set +x 00:04:28.666 18:11:40 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:28.666 18:11:40 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:28.666 18:11:40 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:28.666 18:11:40 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:28.666 18:11:40 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:28.666 18:11:40 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:28.666 18:11:40 -- common/autotest_common.sh@1455 -- # uname 00:04:28.666 18:11:40 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:28.666 18:11:40 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:28.666 18:11:40 -- common/autotest_common.sh@1475 -- # uname 00:04:28.941 18:11:40 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:28.941 18:11:40 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:28.941 18:11:40 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:28.941 18:11:40 -- spdk/autotest.sh@72 -- # hash lcov 00:04:28.941 18:11:40 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:28.941 18:11:40 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:28.941 --rc lcov_branch_coverage=1 00:04:28.941 --rc lcov_function_coverage=1 00:04:28.941 --rc genhtml_branch_coverage=1 00:04:28.941 --rc genhtml_function_coverage=1 00:04:28.941 --rc genhtml_legend=1 00:04:28.941 --rc geninfo_all_blocks=1 00:04:28.941 ' 00:04:28.941 18:11:40 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:28.941 --rc lcov_branch_coverage=1 00:04:28.941 --rc lcov_function_coverage=1 00:04:28.941 --rc genhtml_branch_coverage=1 00:04:28.941 --rc genhtml_function_coverage=1 00:04:28.941 --rc genhtml_legend=1 00:04:28.941 --rc geninfo_all_blocks=1 00:04:28.941 ' 00:04:28.941 18:11:40 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:28.941 --rc lcov_branch_coverage=1 00:04:28.941 --rc lcov_function_coverage=1 00:04:28.941 --rc genhtml_branch_coverage=1 00:04:28.941 --rc genhtml_function_coverage=1 00:04:28.941 --rc genhtml_legend=1 00:04:28.941 --rc geninfo_all_blocks=1 00:04:28.941 --no-external' 00:04:28.941 18:11:40 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:28.941 --rc lcov_branch_coverage=1 00:04:28.941 --rc lcov_function_coverage=1 00:04:28.941 --rc genhtml_branch_coverage=1 00:04:28.941 --rc genhtml_function_coverage=1 00:04:28.941 --rc genhtml_legend=1 00:04:28.941 --rc geninfo_all_blocks=1 00:04:28.941 --no-external' 00:04:28.941 18:11:40 -- spdk/autotest.sh@83 -- # lcov 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:28.941 lcov: LCOV version 1.14 00:04:28.942 18:11:40 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:47.065 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:47.065 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:01.940 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:01.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:01.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:01.941 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:01.941 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:01.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:01.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:04.495 18:12:15 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:04.496 18:12:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:04.496 18:12:15 -- common/autotest_common.sh@10 -- # set +x 00:05:04.496 18:12:15 -- spdk/autotest.sh@91 -- # rm -f 00:05:04.496 18:12:16 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.754 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.754 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:04.754 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:04.754 18:12:16 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:04.754 18:12:16 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:04.754 18:12:16 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:04.754 18:12:16 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:04.754 18:12:16 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:04.754 18:12:16 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:04.754 18:12:16 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:04.754 18:12:16 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:04.754 18:12:16 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:04.754 18:12:16 -- common/autotest_common.sh@1672 -- # for 
nvme in /sys/block/nvme* 00:05:04.754 18:12:16 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:04.754 18:12:16 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:04.754 18:12:16 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:04.754 18:12:16 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:04.754 18:12:16 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:04.754 18:12:16 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:05:04.754 18:12:16 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:05:04.754 18:12:16 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:04.754 18:12:16 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:04.754 18:12:16 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:04.754 18:12:16 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:05:04.754 18:12:16 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:05:04.754 18:12:16 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:04.754 18:12:16 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:04.754 18:12:16 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:04.754 18:12:16 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.754 18:12:16 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:04.754 18:12:16 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:04.754 18:12:16 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:04.754 18:12:16 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:05.012 No valid GPT data, bailing 00:05:05.012 18:12:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:05.012 18:12:16 -- scripts/common.sh@391 -- # pt= 00:05:05.012 18:12:16 -- scripts/common.sh@392 -- # return 1 00:05:05.012 18:12:16 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:05.012 1+0 records in 00:05:05.012 1+0 records out 00:05:05.012 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00491152 s, 213 MB/s 00:05:05.012 18:12:16 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:05.012 18:12:16 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:05.012 18:12:16 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:05.012 18:12:16 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:05.012 18:12:16 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:05.012 No valid GPT data, bailing 00:05:05.012 18:12:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:05.012 18:12:16 -- scripts/common.sh@391 -- # pt= 00:05:05.012 18:12:16 -- scripts/common.sh@392 -- # return 1 00:05:05.012 18:12:16 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:05.012 1+0 records in 00:05:05.012 1+0 records out 00:05:05.012 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00446101 s, 235 MB/s 00:05:05.012 18:12:16 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:05.012 18:12:16 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:05.012 18:12:16 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:05:05.012 18:12:16 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:05:05.012 18:12:16 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:05.012 No valid GPT data, bailing 
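Each NVMe namespace above goes through the same gate before the tests run: block_in_use consults scripts/spdk-gpt.py and blkid, and only when no partition table is found ("No valid GPT data, bailing", empty PTTYPE) does autotest overwrite the first 1 MiB with zeros via dd. A condensed sketch of that per-device loop, keeping only the blkid/dd part of the real helper and reusing the invocations shown in the trace:

shopt -s extglob                                # needed for the !(*p*) glob that skips partitions
for dev in /dev/nvme*n!(*p*); do
  pt=$(blkid -s PTTYPE -o value "$dev")         # empty when the device has no partition table
  if [ -z "$pt" ]; then
    # device looks unused: zero the first 1 MiB, as in the dd lines above
    dd if=/dev/zero of="$dev" bs=1M count=1
  fi
done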
00:05:05.012 18:12:17 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:05.012 18:12:17 -- scripts/common.sh@391 -- # pt= 00:05:05.012 18:12:17 -- scripts/common.sh@392 -- # return 1 00:05:05.012 18:12:17 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:05.270 1+0 records in 00:05:05.270 1+0 records out 00:05:05.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0046525 s, 225 MB/s 00:05:05.270 18:12:17 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:05.270 18:12:17 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:05.270 18:12:17 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:05:05.270 18:12:17 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:05:05.270 18:12:17 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:05.270 No valid GPT data, bailing 00:05:05.270 18:12:17 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:05.270 18:12:17 -- scripts/common.sh@391 -- # pt= 00:05:05.270 18:12:17 -- scripts/common.sh@392 -- # return 1 00:05:05.270 18:12:17 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:05.270 1+0 records in 00:05:05.270 1+0 records out 00:05:05.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00492482 s, 213 MB/s 00:05:05.270 18:12:17 -- spdk/autotest.sh@118 -- # sync 00:05:05.270 18:12:17 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:05.270 18:12:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:05.270 18:12:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:07.184 18:12:18 -- spdk/autotest.sh@124 -- # uname -s 00:05:07.184 18:12:18 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:07.184 18:12:18 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:07.184 18:12:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.184 18:12:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.184 18:12:18 -- common/autotest_common.sh@10 -- # set +x 00:05:07.184 ************************************ 00:05:07.184 START TEST setup.sh 00:05:07.184 ************************************ 00:05:07.184 18:12:19 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:07.184 * Looking for test storage... 00:05:07.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:07.184 18:12:19 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:07.184 18:12:19 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:07.184 18:12:19 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:07.184 18:12:19 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.184 18:12:19 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.184 18:12:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:07.184 ************************************ 00:05:07.184 START TEST acl 00:05:07.184 ************************************ 00:05:07.184 18:12:19 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:07.184 * Looking for test storage... 
00:05:07.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:07.442 18:12:19 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:07.442 18:12:19 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:07.442 18:12:19 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:07.442 18:12:19 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:07.442 18:12:19 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:07.442 18:12:19 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:07.442 18:12:19 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:07.442 18:12:19 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:07.442 18:12:19 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:07.442 18:12:19 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:07.442 18:12:19 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:07.442 18:12:19 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:07.442 18:12:19 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:07.442 18:12:19 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:07.442 18:12:19 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:07.442 18:12:19 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:05:07.442 18:12:19 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:05:07.442 18:12:19 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:07.442 18:12:19 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:07.442 18:12:19 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:07.442 18:12:19 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:05:07.442 18:12:19 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:05:07.442 18:12:19 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:07.442 18:12:19 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:07.442 18:12:19 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:07.442 18:12:19 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:07.442 18:12:19 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:07.442 18:12:19 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:07.442 18:12:19 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:07.442 18:12:19 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:07.442 18:12:19 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:08.008 18:12:19 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:08.008 18:12:19 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:08.008 18:12:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.008 18:12:19 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:08.008 18:12:19 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.008 18:12:19 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:08.941 18:12:20 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:08.941 18:12:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:08.941 18:12:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.941 Hugepages 00:05:08.941 node hugesize free / total 00:05:08.941 18:12:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:08.941 18:12:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:08.941 18:12:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.941 00:05:08.941 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:08.941 18:12:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:08.941 18:12:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:08.941 18:12:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.941 18:12:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:08.941 18:12:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:08.941 18:12:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:08.941 18:12:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.941 18:12:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:08.941 18:12:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:08.941 18:12:20 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:08.941 18:12:20 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:08.941 18:12:20 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:08.941 18:12:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.941 18:12:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:08.941 18:12:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:08.941 18:12:20 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:08.941 18:12:20 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:08.941 18:12:20 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:08.941 18:12:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:08.941 18:12:20 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:08.941 18:12:20 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:08.941 18:12:20 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.941 18:12:20 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.941 18:12:20 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:08.941 ************************************ 00:05:08.941 START TEST denied 00:05:08.941 ************************************ 00:05:08.941 18:12:20 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:05:08.941 18:12:20 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:08.941 18:12:20 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:08.941 18:12:20 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.941 18:12:20 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:08.941 18:12:20 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:09.932 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:09.932 18:12:21 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:09.932 18:12:21 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:05:09.932 18:12:21 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:09.932 18:12:21 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:09.932 18:12:21 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:09.932 18:12:21 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:09.932 18:12:21 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:09.932 18:12:21 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:09.932 18:12:21 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:09.932 18:12:21 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:10.497 00:05:10.497 real 0m1.516s 00:05:10.497 user 0m0.583s 00:05:10.497 sys 0m0.879s 00:05:10.497 18:12:22 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.497 ************************************ 00:05:10.497 END TEST denied 00:05:10.497 ************************************ 00:05:10.497 18:12:22 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:10.497 18:12:22 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:10.497 18:12:22 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:10.497 18:12:22 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.497 18:12:22 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.497 18:12:22 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:10.497 ************************************ 00:05:10.497 START TEST allowed 00:05:10.497 ************************************ 00:05:10.497 18:12:22 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:05:10.497 18:12:22 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:10.497 18:12:22 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:10.497 18:12:22 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.497 18:12:22 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:10.497 18:12:22 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:11.433 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.433 18:12:23 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:05:11.433 18:12:23 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:11.433 18:12:23 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:11.433 18:12:23 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:05:11.433 18:12:23 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:05:11.433 18:12:23 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:11.433 18:12:23 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:11.433 18:12:23 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:11.433 18:12:23 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:11.433 18:12:23 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:12.051 00:05:12.051 real 0m1.551s 00:05:12.051 user 0m0.664s 00:05:12.051 sys 0m0.883s 00:05:12.051 18:12:24 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:05:12.051 18:12:24 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:12.051 ************************************ 00:05:12.051 END TEST allowed 00:05:12.051 ************************************ 00:05:12.311 18:12:24 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:12.311 ************************************ 00:05:12.311 END TEST acl 00:05:12.311 ************************************ 00:05:12.311 00:05:12.311 real 0m4.955s 00:05:12.311 user 0m2.115s 00:05:12.311 sys 0m2.781s 00:05:12.311 18:12:24 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.311 18:12:24 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:12.311 18:12:24 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:12.311 18:12:24 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:12.311 18:12:24 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.311 18:12:24 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.311 18:12:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:12.311 ************************************ 00:05:12.311 START TEST hugepages 00:05:12.311 ************************************ 00:05:12.311 18:12:24 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:12.311 * Looking for test storage... 00:05:12.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5708212 kB' 'MemAvailable: 7422808 kB' 'Buffers: 2436 kB' 'Cached: 1925528 kB' 'SwapCached: 0 kB' 'Active: 476120 kB' 'Inactive: 1556068 kB' 'Active(anon): 114712 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 316 kB' 'Writeback: 0 kB' 'AnonPages: 105604 kB' 'Mapped: 48876 kB' 'Shmem: 10488 kB' 'KReclaimable: 68116 kB' 'Slab: 141636 kB' 'SReclaimable: 68116 kB' 'SUnreclaim: 73520 kB' 'KernelStack: 6444 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 335064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.311 18:12:24 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.311 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.312 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.313 18:12:24 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:12.313 18:12:24 
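A minimal sketch (simplified from the setup/common.sh xtrace above, not the verbatim helper) of what that long run of "continue" entries records: get_meminfo walks /proc/meminfo one "Key: value" pair at a time, skips every key that is not the requested one, and prints the value of the first match. The helper name below is illustrative.

get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue    # one skipped key per "continue" line in the trace
        echo "$val"                         # e.g. "2048" for Hugepagesize
        return 0
    done < /proc/meminfo
}

get_meminfo_sketch Hugepagesize    # prints 2048 on this runner, hence default_hugepages=2048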
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:12.313 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:12.313 18:12:24 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.313 18:12:24 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.313 18:12:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:12.313 ************************************ 00:05:12.313 START TEST default_setup 00:05:12.313 ************************************ 00:05:12.313 18:12:24 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:05:12.313 18:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:12.313 18:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:12.313 18:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:12.313 18:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:12.313 18:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:12.313 18:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:12.313 18:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:12.313 18:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:12.313 18:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:12.313 18:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:12.313 18:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:12.313 18:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:12.313 18:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:12.313 18:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:12.313 18:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:12.313 18:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:12.313 18:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:12.313 18:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:12.313 18:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:12.313 18:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:12.313 18:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.313 18:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:13.252 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:13.252 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.252 0000:00:11.0 (1b36 
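Hedged sketch of the two steps the trace just performed (paths and values are taken from this run for illustration, not copied verbatim from setup/hugepages.sh): clear_hp zeroes each per-node hugepage pool (the bare "echo 0" entries above, with the redirection not shown by xtrace), and get_test_nr_hugepages converts the requested 2097152 kB into a page count using the 2048 kB default hugepage size.

for hp in /sys/devices/system/node/node0/hugepages/hugepages-*; do
    echo 0 > "$hp/nr_hugepages"        # reset the pool before the test sets its own value
done

size_kb=2097152
default_hugepages_kb=2048
nr_hugepages=$(( size_kb / default_hugepages_kb ))
echo "$nr_hugepages"                   # 1024, matching nodes_test[0]=1024 in the trace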
0010): nvme -> uio_pci_generic 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7802732 kB' 'MemAvailable: 9517176 kB' 'Buffers: 2436 kB' 'Cached: 1925516 kB' 'SwapCached: 0 kB' 'Active: 493356 kB' 'Inactive: 1556076 kB' 'Active(anon): 131948 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123112 kB' 'Mapped: 49008 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141324 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73524 kB' 'KernelStack: 6416 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.252 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.253 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
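A sketch of the verification now underway (an assumed reading of the verify_nr_hugepages xtrace, reusing the get_meminfo_sketch helper from the earlier sketch): anonymous huge pages are only counted when transparent hugepages are not forced to "[never]", and the surplus counter is read next so the test can compare the pool against the 1024 pages it requested; the HugePages_Surp scan is what continues below.

thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" on this VM
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo_sketch AnonHugePages)              # 0 kB in the snapshot above, so anon=0
else
    anon=0
fi
surp=$(get_meminfo_sketch HugePages_Surp)                 # the lookup traced in the lines that follow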
# mem=("${mem[@]#Node +([0-9]) }") 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7809024 kB' 'MemAvailable: 9523472 kB' 'Buffers: 2436 kB' 'Cached: 1925516 kB' 'SwapCached: 0 kB' 'Active: 493052 kB' 'Inactive: 1556080 kB' 'Active(anon): 131644 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122796 kB' 'Mapped: 48884 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141228 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73428 kB' 'KernelStack: 6448 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.254 18:12:25 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.254 18:12:25 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.254 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.255 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7808868 kB' 'MemAvailable: 9523316 kB' 'Buffers: 2436 kB' 'Cached: 1925516 kB' 'SwapCached: 0 kB' 'Active: 492944 kB' 'Inactive: 1556080 kB' 'Active(anon): 131536 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122664 kB' 'Mapped: 
48824 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141236 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73436 kB' 'KernelStack: 6432 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
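
The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" entries above is bash xtrace of a plain per-line scan over /proc/meminfo: each "Key: value" pair is read with IFS=': ', skipped until the key matches the one requested (HugePages_Surp here, HugePages_Rsvd and HugePages_Total further down), and the matching value is echoed (the "echo 0" / "return 0" entries); the backslash-escaped key is just how xtrace renders the quoted comparison string. A minimal stand-alone sketch of that scan, written for illustration and not taken from setup/common.sh, using a hypothetical helper name:

# Illustrative sketch of the /proc/meminfo scan traced above; the function
# name is hypothetical and the real script keeps more state (node override,
# cached mapfile array), but the lookup logic is the same idea.
get_meminfo_value() {
    local key=$1 var val _
    while IFS=': ' read -r var val _; do            # "HugePages_Surp:   0" -> var/val
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}
# usage: surp=$(get_meminfo_value HugePages_Surp)   # -> 0 in this run
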
00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.256 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.257 
18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.257 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:13.258 nr_hugepages=1024 00:05:13.258 resv_hugepages=0 00:05:13.258 surplus_hugepages=0 00:05:13.258 anon_hugepages=0 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7808616 kB' 'MemAvailable: 9523064 kB' 'Buffers: 2436 kB' 'Cached: 1925516 kB' 'SwapCached: 0 kB' 'Active: 493160 kB' 'Inactive: 1556080 kB' 'Active(anon): 131752 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122884 kB' 'Mapped: 48824 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141236 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73436 kB' 'KernelStack: 6432 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.258 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.259 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.518 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.518 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.518 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.518 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.518 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.518 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.518 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.518 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.518 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.518 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.518 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.518 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.518 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.518 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.518 18:12:25 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.518 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.518 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.518 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.518 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.519 
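
Between the two meminfo dumps the script has already settled the bookkeeping this stage is about: surp=0 (HugePages_Surp), resv=0 (HugePages_Rsvd), nr_hugepages=1024, and the check that the 1024-page target equals nr_hugepages + surp + resv; the HugePages_Total lookup being traced here echoes 1024 as well. With the 2048 kB page size reported in the dump, those 1024 pages are exactly the 2097152 kB 'Hugetlb' pool. A self-contained sketch of that accounting check, using awk for brevity where the traced script does the same scan in pure bash:

# Hedged sketch of the accounting the trace verifies; values in comments are
# the ones visible in this run.
nr=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)    # 1024
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)   # 0
resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)   # 0
(( 1024 == nr + surp + resv )) \
    && echo "hugepage pool fully accounted for" \
    || echo "unexpected surplus/reserved hugepages" >&2
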
18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.519 18:12:25 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # 
no_nodes=1 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.519 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7807860 kB' 'MemUsed: 4434112 kB' 'SwapCached: 0 kB' 'Active: 493144 kB' 'Inactive: 1556080 kB' 'Active(anon): 131736 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1927952 kB' 'Mapped: 48824 kB' 'AnonPages: 122868 kB' 'Shmem: 10464 kB' 'KernelStack: 6432 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67800 kB' 'Slab: 141236 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73436 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.520 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
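The get_meminfo calls traced above read the whole meminfo file into an array and then "continue" past every field that does not match the requested key, which is why each lookup produces the long run of "# continue" records; the value that finally matches (here HugePages_Surp for node 0) is echoed back to the caller. The snippet below is a condensed sketch of that lookup, reconstructed only from the visible xtrace rather than copied from setup/common.sh; the function name get_meminfo_sketch is illustrative, while the /proc/meminfo and /sys/devices/system/node/nodeN/meminfo paths and the "Node N " prefix handling come straight from the trace.

# Simplified reconstruction of the per-key meminfo lookup seen in the trace.
# Not the exact SPDK helper: it resolves the file, strips the per-node prefix,
# and prints the value of the one requested field.
get_meminfo_sketch() {
    local get=$1 node=${2-}
    local mem_f=/proc/meminfo
    # Per-node statistics live under /sys and prefix every line with "Node N ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        [[ -n $node ]] && line=${line#"Node $node "}
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"    # e.g. 0 for HugePages_Surp, 1024 for HugePages_Total here
            return 0
        fi
    done < "$mem_f"
    echo 0                 # field not present in this file
}

For scale, the numbers these checks rely on are plain huge-page arithmetic: default_setup ends with node0 holding 1024 pages of Hugepagesize 2048 kB (2 GiB), and the per_node_1G_alloc test that starts below asks get_test_nr_hugepages for 1048576 kB on node 0, i.e. 1048576 / 2048 = 512 pages (NRHUGE=512, HUGENODE=0), which matches the HugePages_Total: 512 reported by the later meminfo dumps.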
00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:13.521 node0=1024 expecting 1024 00:05:13.521 ************************************ 00:05:13.521 END TEST default_setup 00:05:13.521 ************************************ 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:13.521 00:05:13.521 real 0m1.043s 00:05:13.521 user 0m0.500s 00:05:13.521 sys 0m0.486s 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.521 18:12:25 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:13.521 18:12:25 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:13.521 18:12:25 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:13.521 18:12:25 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.521 18:12:25 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.521 18:12:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:13.521 ************************************ 00:05:13.521 START TEST per_node_1G_alloc 00:05:13.521 ************************************ 00:05:13.521 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:05:13.521 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:13.521 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:13.521 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:13.521 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:13.521 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:13.521 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:13.521 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:13.521 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:13.521 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:13.521 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:13.521 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:13.521 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:13.521 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:13.521 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:13.521 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:13.521 18:12:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:13.521 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:13.521 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:13.521 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:13.521 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:13.521 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:13.521 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:13.521 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:13.521 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.521 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:13.781 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:13.781 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:13.781 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:13.781 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:13.781 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:13.781 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:13.781 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:13.781 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:13.781 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:13.781 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:13.781 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:13.781 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:13.781 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:13.781 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:13.781 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:13.781 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:13.781 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.781 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.781 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.781 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.781 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.781 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.781 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.781 18:12:25 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8853544 kB' 'MemAvailable: 10567988 kB' 'Buffers: 2436 kB' 'Cached: 1925512 kB' 'SwapCached: 0 kB' 'Active: 493288 kB' 'Inactive: 1556076 kB' 'Active(anon): 131880 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123060 kB' 'Mapped: 49008 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141216 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73416 kB' 'KernelStack: 6484 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.782 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.783 18:12:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.783 18:12:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8853544 kB' 'MemAvailable: 10567988 kB' 'Buffers: 2436 kB' 'Cached: 1925512 kB' 'SwapCached: 0 kB' 'Active: 493012 kB' 'Inactive: 1556076 kB' 'Active(anon): 131604 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122808 kB' 'Mapped: 48940 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141220 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73420 kB' 'KernelStack: 6444 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.783 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.046 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.046 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.046 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.046 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.046 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.046 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.046 18:12:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.046 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.046 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.047 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 18:12:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 18:12:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8854196 kB' 'MemAvailable: 10568640 kB' 'Buffers: 2436 kB' 'Cached: 1925512 kB' 'SwapCached: 0 kB' 'Active: 493052 kB' 'Inactive: 1556076 kB' 'Active(anon): 131644 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122600 kB' 'Mapped: 48940 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141220 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73420 kB' 'KernelStack: 6460 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.048 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.049 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.049 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.049 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue
[xtrace elided: setup/common.sh@31-32 scanning the remaining /proc/meminfo keys for HugePages_Rsvd; every non-matching key takes the "continue" branch]
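The setup/hugepages.sh entries just below are the actual check this test case makes: the counters read back from meminfo have to add up to the 512 pages the run configured, globally and then per NUMA node. A condensed, hypothetical sketch of that bookkeeping (meminfo_val stands in for the traced get_meminfo helper; the real verification in setup/hugepages.sh is more elaborate):

#!/usr/bin/env bash
shopt -s extglob

meminfo_val() {   # meminfo_val <Key> [numa-node] -> numeric value from (per-node) meminfo
    local f=/proc/meminfo
    [[ -n ${2-} ]] && f=/sys/devices/system/node/node$2/meminfo
    # per-node files prefix lines with "Node N ", so match the key in any field
    awk -v key="$1:" '{ for (i = 1; i <= NF; i++) if ($i == key) { print $(i + 1); exit } }' "$f"
}

nr_hugepages=512                        # what this run configured
surp=$(meminfo_val HugePages_Surp)
resv=$(meminfo_val HugePages_Rsvd)
total=$(meminfo_val HugePages_Total)

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"

# global consistency check, mirroring "(( 512 == nr_hugepages + surp + resv ))" in the trace
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2

# per-node view, mirroring the node0 HugePages_Surp lookup later in the log
for node in /sys/devices/system/node/node+([0-9]); do
    echo "node${node##*node} HugePages_Surp: $(meminfo_val HugePages_Surp "${node##*node}")"
done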
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:14.050 nr_hugepages=512 00:05:14.050 resv_hugepages=0 00:05:14.050 surplus_hugepages=0 00:05:14.050 anon_hugepages=0 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8854196 kB' 'MemAvailable: 10568640 kB' 'Buffers: 2436 kB' 'Cached: 1925512 kB' 'SwapCached: 0 kB' 'Active: 492968 kB' 'Inactive: 1556076 kB' 'Active(anon): 131560 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 
kB' 'Writeback: 0 kB' 'AnonPages: 122504 kB' 'Mapped: 48940 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141216 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73416 kB' 'KernelStack: 6444 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.050 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.051 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.051 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.051 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.051 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
[xtrace elided: setup/common.sh@31-32 scanning the /proc/meminfo keys for HugePages_Total; every non-matching key takes the "continue" branch]
00:05:14.052 18:12:25 setup.sh.hugepages.per_node_1G_alloc
-- setup/common.sh@32 -- # continue 00:05:14.052 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.052 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.052 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.052 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:14.052 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:14.052 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:14.052 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:14.052 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:14.052 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:14.052 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:14.052 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:14.052 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:14.052 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:14.052 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:14.052 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:14.052 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.052 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:14.052 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:14.052 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.052 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.052 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:14.052 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:14.052 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.052 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8854620 kB' 'MemUsed: 3387352 kB' 'SwapCached: 0 kB' 'Active: 493004 kB' 'Inactive: 1556076 kB' 'Active(anon): 131596 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1927948 kB' 'Mapped: 48940 kB' 'AnonPages: 122504 kB' 'Shmem: 10464 kB' 'KernelStack: 6444 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 67800 kB' 'Slab: 141216 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73416 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.053 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.053 18:12:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
[xtrace elided: setup/common.sh@31-32 scanning node0's meminfo keys for HugePages_Surp; every non-matching key takes the "continue" branch]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.054 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.054 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.054 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.054 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.054 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.054 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.054 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.054 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.054 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.054 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.054 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.054 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.054 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:14.054 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.054 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.054 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.054 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.054 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:14.054 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:14.054 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:14.054 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:14.054 node0=512 expecting 512 00:05:14.054 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:14.054 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:14.054 ************************************ 00:05:14.054 END TEST per_node_1G_alloc 00:05:14.054 ************************************ 00:05:14.054 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:14.054 00:05:14.054 real 0m0.571s 00:05:14.054 user 0m0.280s 00:05:14.054 sys 0m0.301s 00:05:14.054 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.054 18:12:25 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:14.054 18:12:25 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:14.054 18:12:25 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:14.054 18:12:25 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.054 18:12:25 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.054 18:12:25 
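Editor's note: the per_node_1G_alloc pass above ends with get_meminfo reporting HugePages_Surp=0 for node 0, so nodes_test[0] stays at 512 and the 'node0=512 expecting 512' check passes. The long field-by-field trace is simply get_meminfo scanning a meminfo file one line at a time. A rough sketch of that pattern, approximating (not reproducing) the helper in the setup/common.sh the trace references:

  shopt -s extglob                                   # needed for the +([0-9]) prefix strip below
  get_meminfo() {                                    # get_meminfo <field> [<numa node>]
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo mem line
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")               # per-node files prefix every line with "Node N"
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"     # e.g. var=HugePages_Surp val=0
          [[ $var == "$get" ]] && echo "$val" && return 0
      done
      return 1
  }

With the 512-page reservation shown earlier in this log, a call like get_meminfo HugePages_Total 0 would print 512, and get_meminfo HugePages_Surp 0 prints the 0 that the test adds to nodes_test[0] above.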
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:14.054 ************************************ 00:05:14.054 START TEST even_2G_alloc 00:05:14.054 ************************************ 00:05:14.054 18:12:25 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:05:14.054 18:12:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:14.054 18:12:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:14.054 18:12:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:14.054 18:12:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:14.054 18:12:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:14.054 18:12:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:14.054 18:12:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:14.054 18:12:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:14.054 18:12:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:14.054 18:12:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:14.054 18:12:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:14.054 18:12:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:14.054 18:12:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:14.054 18:12:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:14.054 18:12:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:14.054 18:12:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:14.054 18:12:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:14.054 18:12:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:14.054 18:12:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:14.054 18:12:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:14.054 18:12:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:14.054 18:12:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:14.054 18:12:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.054 18:12:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:14.313 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:14.576 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:14.576 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:14.576 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:14.576 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:14.576 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:14.576 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:14.576 18:12:26 setup.sh.hugepages.even_2G_alloc 
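Editor's note: the even_2G_alloc setup traced above reduces to simple arithmetic. get_test_nr_hugepages converts the requested size into a page count and, with no user node list and a single NUMA node, get_test_nr_hugepages_per_node assigns the whole allocation to node 0 before HUGE_EVEN_ALLOC=yes setup is run. An illustrative sketch of that calculation (variable names approximate; the page size comes from the 'Hugepagesize: 2048 kB' lines in this log):

  size_kb=2097152                                      # even_2G_alloc asks for 2 GiB of hugepage memory
  default_hugepages_kb=2048                            # system hugepage size
  nr_hugepages=$(( size_kb / default_hugepages_kb ))   # -> 1024, matching nr_hugepages=1024 above
  nodes_test[0]=$nr_hugepages                          # matches nodes_test[_no_nodes - 1]=1024 above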
-- setup/hugepages.sh@92 -- # local surp 00:05:14.576 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:14.576 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:14.576 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:14.576 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:14.576 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:14.576 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:14.576 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:14.576 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.576 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.576 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.576 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.576 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.576 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.576 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.576 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7805952 kB' 'MemAvailable: 9520400 kB' 'Buffers: 2436 kB' 'Cached: 1925516 kB' 'SwapCached: 0 kB' 'Active: 493060 kB' 'Inactive: 1556080 kB' 'Active(anon): 131652 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123036 kB' 'Mapped: 48936 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141256 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73456 kB' 'KernelStack: 6456 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- 
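Editor's note: the '[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]' test near the top of this snapshot is verify_nr_hugepages checking whether transparent hugepages are disabled before it reads AnonHugePages (which reports 0 kB here, so anon ends up 0). A minimal sketch of that guard, assuming the standard sysfs path (the trace shows only the pattern test, not the file the helper reads) and reusing the get_meminfo pattern sketched earlier:

  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo AnonHugePages)                    # THP not disabled: count anonymous hugepages too
  else
      anon=0
  fi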
setup/common.sh@32 -- # continue 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.577 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.578 18:12:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7805700 kB' 'MemAvailable: 9520148 kB' 'Buffers: 2436 kB' 'Cached: 1925516 kB' 'SwapCached: 0 kB' 'Active: 492732 kB' 'Inactive: 
1556080 kB' 'Active(anon): 131324 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122744 kB' 'Mapped: 48824 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141260 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73460 kB' 'KernelStack: 6432 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.578 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.579 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.580 18:12:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7805700 kB' 'MemAvailable: 9520148 kB' 'Buffers: 2436 kB' 'Cached: 1925516 kB' 'SwapCached: 0 kB' 'Active: 492736 kB' 'Inactive: 1556080 kB' 'Active(anon): 131328 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122704 kB' 'Mapped: 48824 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141260 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73460 kB' 'KernelStack: 6416 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.580 18:12:26 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.581 18:12:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.581 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.582 18:12:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:05:14.582 nr_hugepages=1024 00:05:14.582 resv_hugepages=0 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:14.582 surplus_hugepages=0 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:14.582 anon_hugepages=0 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7805700 kB' 'MemAvailable: 9520148 kB' 'Buffers: 2436 kB' 'Cached: 1925516 kB' 'SwapCached: 0 kB' 'Active: 492840 kB' 'Inactive: 1556080 kB' 'Active(anon): 131432 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122656 kB' 'Mapped: 48824 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141260 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73460 kB' 'KernelStack: 6416 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:14.582 18:12:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.582 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.583 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
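The xtrace above and below is the get_meminfo helper from setup/common.sh walking a meminfo file one key at a time until it reaches the requested field; the long runs of "[[ <key> == <wanted> ]] / continue" entries are that scan. A condensed sketch of the lookup, reconstructed from the commands visible in the trace (the real helper may differ in detail):

#!/usr/bin/env bash
# Sketch of the get_meminfo lookup seen in the surrounding trace.
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # A node argument switches the source to that node's meminfo file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem <"$mem_f"
    # Per-node files prefix every line with "Node N "; strip that prefix.
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")
    # Walk the keys one by one and print the value of the first match.
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

# Values as reported in the trace: HugePages_Rsvd and HugePages_Surp are 0,
# HugePages_Total is 1024.
get_meminfo HugePages_Total      # -> 1024
get_meminfo HugePages_Surp 0     # -> 0 (read from node0's meminfo)

With no node argument the helper reads /proc/meminfo; with node=0 (used further down in the trace) it switches to /sys/devices/system/node/node0/meminfo and strips the "Node 0 " prefix from every line before parsing.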
00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:14.584 18:12:26 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7805448 kB' 'MemUsed: 4436524 kB' 'SwapCached: 0 kB' 'Active: 492696 kB' 'Inactive: 1556080 kB' 'Active(anon): 131288 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1927952 kB' 'Mapped: 48824 kB' 'AnonPages: 122656 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67800 kB' 'Slab: 141256 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73456 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.584 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.585 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:14.586 node0=1024 expecting 1024 00:05:14.586 ************************************ 00:05:14.586 END TEST even_2G_alloc 00:05:14.586 
************************************
00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:14.586
00:05:14.586 real 0m0.569s
00:05:14.586 user 0m0.275s
00:05:14.586 sys 0m0.290s
00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:14.586 18:12:26 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:14.844 18:12:26 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:05:14.844 18:12:26 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:14.844 18:12:26 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:14.844 18:12:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:14.844 18:12:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:14.844 ************************************
00:05:14.844 START TEST odd_alloc
00:05:14.844 ************************************
00:05:14.844 18:12:26 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:05:14.844 18:12:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:14.844 18:12:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:05:14.844 18:12:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:14.844 18:12:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:14.844 18:12:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:14.844 18:12:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:14.844 18:12:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:14.844 18:12:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:14.844 18:12:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:14.844 18:12:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:14.844 18:12:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:14.844 18:12:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:14.844 18:12:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:14.844 18:12:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:14.844 18:12:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:14.844 18:12:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:05:14.844 18:12:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:14.844 18:12:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:14.844 18:12:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:14.844 18:12:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:14.844 18:12:26 setup.sh.hugepages.odd_alloc --
setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:14.844 18:12:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:14.844 18:12:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.845 18:12:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:15.110 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:15.110 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:15.110 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:15.110 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:15.110 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:15.110 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:15.110 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:15.110 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:15.110 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:15.110 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:15.110 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:15.110 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:15.110 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:15.110 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:15.110 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:15.110 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:15.110 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.110 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.110 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.110 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.110 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.110 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.110 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7800144 kB' 'MemAvailable: 9514592 kB' 'Buffers: 2436 kB' 'Cached: 1925516 kB' 'SwapCached: 0 kB' 'Active: 493020 kB' 'Inactive: 1556080 kB' 'Active(anon): 131612 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122724 kB' 'Mapped: 48944 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141288 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73488 kB' 'KernelStack: 6448 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 352128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 
'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
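Note (not part of the captured console output): the long run of "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" entries around this point is setup/common.sh's get_meminfo walking the captured /proc/meminfo fields one at a time until the requested key matches, then echoing its value. A minimal sketch of that lookup pattern, using an illustrative helper name and leaving out the per-node meminfo handling visible in the trace, so an approximation rather than the actual setup/common.sh code:

  # illustrative approximation only -- not the actual setup/common.sh helper
  meminfo_lookup() {
      local key=$1 var val _
      while IFS=': ' read -r var val _; do
          # same IFS=': ' / read -r var val _ splitting seen in the trace above
          [[ $var == "$key" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }
  meminfo_lookup AnonHugePages   # prints 0 on this VM, matching the anon=0 result shortly after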
00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.111 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # 
printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7800144 kB' 'MemAvailable: 9514592 kB' 'Buffers: 2436 kB' 'Cached: 1925516 kB' 'SwapCached: 0 kB' 'Active: 492956 kB' 'Inactive: 1556080 kB' 'Active(anon): 131548 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122916 kB' 'Mapped: 48828 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141288 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73488 kB' 'KernelStack: 6416 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 352128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
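Note (not part of the captured console output): the snapshot printed just above reports 'HugePages_Total: 1025', 'Hugepagesize: 2048 kB' and 'Hugetlb: 2099200 kB', which is consistent with the odd_alloc sizing traced earlier (get_test_nr_hugepages 2098176 -> nr_hugepages=1025, HUGEMEM=2049). A short sketch of that arithmetic, assuming the helper rounds the kB request up to whole hugepages (the exact rounding rule is not visible in this trace):

  # illustrative arithmetic only
  size_kb=2098176                                        # requested size traced earlier
  hugepage_kb=2048                                       # Hugepagesize from the snapshot
  echo $(( (size_kb + hugepage_kb - 1) / hugepage_kb ))  # 1025 -> HugePages_Total
  echo $(( 1025 * hugepage_kb ))                         # 2099200 kB -> Hugetlb
  echo $(( 2049 * 1024 ))                                # 2098176 kB, i.e. HUGEMEM=2049 MiB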
00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.112 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.113 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7800144 kB' 'MemAvailable: 9514592 kB' 'Buffers: 2436 kB' 'Cached: 1925516 kB' 'SwapCached: 0 kB' 'Active: 493044 kB' 'Inactive: 1556080 kB' 'Active(anon): 131636 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122756 kB' 'Mapped: 48828 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141284 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73484 kB' 'KernelStack: 6416 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 352128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.114 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.115 18:12:27 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.115 
18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.115 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.116 18:12:27 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.116 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.379 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:15.379 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:15.379 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:15.379 nr_hugepages=1025 00:05:15.379 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:15.379 resv_hugepages=0 00:05:15.379 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:15.379 surplus_hugepages=0 00:05:15.379 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:15.379 anon_hugepages=0 00:05:15.379 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:15.379 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:15.379 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:15.379 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:15.379 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:15.379 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:15.379 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:15.379 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:15.379 18:12:27 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7800828 kB' 'MemAvailable: 9515276 kB' 'Buffers: 2436 kB' 'Cached: 1925516 kB' 'SwapCached: 0 kB' 'Active: 493640 kB' 'Inactive: 1556080 kB' 'Active(anon): 132232 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123152 kB' 'Mapped: 48828 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141288 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73488 kB' 'KernelStack: 6464 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 359072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.380 
18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
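(annotation) The trace above and below is setup/common.sh's get_meminfo walking every key of a buffered /proc/meminfo snapshot until it reaches the one requested (here HugePages_Total), comparing each key name and continuing otherwise. A minimal sketch of that lookup pattern follows; this is an illustrative standalone helper, not the real setup/common.sh function, which buffers the file with mapfile and strips the per-node prefix with an extglob pattern instead of the simpler stripping used here.

get_meminfo_sketch() {
    local get=$1 node=${2:-}        # key to look up, optional NUMA node
    local mem_f=/proc/meminfo
    # Per-node counters live in /sys/devices/system/node/nodeN/meminfo and
    # prefix every line with "Node N ", which is stripped below.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local line var val _
    while IFS= read -r line; do
        line=${line#"Node $node "}                # no-op for /proc/meminfo
        IFS=': ' read -r var val _ <<< "$line"    # split "Key:   value unit"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

For the snapshot dumped above, get_meminfo_sketch HugePages_Total would print 1025, and get_meminfo_sketch HugePages_Surp 0 would read node 0's file instead of the system-wide one.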
00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.380 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.381 18:12:27 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.381 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7800828 kB' 'MemUsed: 4441144 kB' 'SwapCached: 0 kB' 'Active: 492780 kB' 'Inactive: 1556080 kB' 'Active(anon): 131372 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 1927952 kB' 'Mapped: 48828 kB' 'AnonPages: 122532 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67800 kB' 'Slab: 141272 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.382 
18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.382 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
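(annotation) The scan that just finished is the same lookup run against node 0 (local node=0, mem_f=/sys/devices/system/node/node0/meminfo); its "echo 0" means node0 reports no surplus hugepages. With the illustrative helper sketched earlier, the equivalent call would be:

# Per-node variant of the lookup; in this run it would print 0,
# matching the "echo 0" immediately above.
surp=$(get_meminfo_sketch HugePages_Surp 0)
echo "node0 HugePages_Surp: ${surp:-unknown}"

The hugepages.sh accounting that follows adds this 0 to the node-0 tally and prints 'node0=1025 expecting 1025' before the final comparison, closing out the odd_alloc test.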
00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:15.383 node0=1025 expecting 1025 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:15.383 00:05:15.383 real 0m0.564s 00:05:15.383 user 0m0.260s 00:05:15.383 sys 0m0.314s 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.383 18:12:27 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:15.383 ************************************ 00:05:15.383 END TEST odd_alloc 00:05:15.383 ************************************ 00:05:15.383 18:12:27 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:15.383 18:12:27 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:15.383 18:12:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.383 18:12:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.383 18:12:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:15.383 ************************************ 00:05:15.383 START TEST custom_alloc 00:05:15.383 ************************************ 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.383 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:15.645 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:15.645 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:15.645 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8853456 kB' 'MemAvailable: 10567908 kB' 'Buffers: 2436 kB' 'Cached: 1925520 kB' 'SwapCached: 0 kB' 'Active: 493132 kB' 'Inactive: 1556084 kB' 'Active(anon): 131724 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122840 kB' 'Mapped: 48956 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141280 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73480 kB' 'KernelStack: 6456 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.645 18:12:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
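The trace above is the get_meminfo helper stepping key-by-key through /proc/meminfo with IFS=': ' and a read -r var val _ loop until it reaches AnonHugePages, echoing the value and returning. A minimal standalone sketch of that lookup pattern (a sketch only, not the SPDK setup/common.sh helper itself; the function name is illustrative):

    # Sketch: return one numeric field from /proc/meminfo, mirroring the
    # IFS=': ' / read -r var val _ scan visible in the trace above.
    get_meminfo_sketch() {
        local want=$1 key val _
        while IFS=': ' read -r key val _; do
            [[ $key == "$want" ]] || continue   # skip non-matching keys, as the trace does
            echo "${val:-0}"
            return 0
        done < /proc/meminfo
        echo 0   # key not present at all
    }

    get_meminfo_sketch AnonHugePages   # prints 0 on this VM, matching anon=0 below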
00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.645 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.646 18:12:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.646 18:12:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.646 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8853456 kB' 'MemAvailable: 10567908 kB' 'Buffers: 2436 kB' 'Cached: 
1925520 kB' 'SwapCached: 0 kB' 'Active: 492836 kB' 'Inactive: 1556084 kB' 'Active(anon): 131428 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122796 kB' 'Mapped: 48956 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141276 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73476 kB' 'KernelStack: 6408 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.647 18:12:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.647 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
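The same per-key scan is then repeated for HugePages_Surp (and next for HugePages_Rsvd). As a hedged alternative, the three counters the verification pass reads could be collected in a single pass with awk; this is only an illustration, not how hugepages.sh does it, and the variable names are made up:

    # Sketch: grab surplus, reserved and anon-THP counters in one /proc/meminfo pass.
    read -r surp resv anon < <(awk '
        /^HugePages_Surp:/ {s=$2}
        /^HugePages_Rsvd:/ {r=$2}
        /^AnonHugePages:/  {a=$2}
        END {print s+0, r+0, a+0}' /proc/meminfo)
    echo "surp=$surp resv=$resv anon_kB=$anon"   # expect surp=0 resv=0 anon_kB=0 here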
00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:15.648 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8853204 kB' 'MemAvailable: 10567656 kB' 'Buffers: 2436 kB' 'Cached: 1925520 kB' 'SwapCached: 0 kB' 'Active: 492936 kB' 'Inactive: 1556084 kB' 'Active(anon): 131528 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122904 kB' 'Mapped: 49088 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141284 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73484 kB' 'KernelStack: 6400 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 354872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 
'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.910 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.911 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:15.912 nr_hugepages=512 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:15.912 resv_hugepages=0 
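For readers following the xtrace above: each get_meminfo call simply walks a meminfo file line by line until it reaches the requested key and echoes that key's value, which is why every other key produces a 'continue' entry before HugePages_Rsvd finally matches and returns 0 (resv=0). A minimal standalone sketch of that lookup, written from the traced commands rather than copied from setup/common.sh (the function name is illustrative, and the real helper strips the per-node 'Node <n>' prefix with a glob pattern instead of sed):

  get_meminfo_sketch() {
      # Usage: get_meminfo_sketch <key> [node-id]   (illustrative rewrite, not the shipped helper)
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # With a node id, read the per-node statistics from sysfs instead.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local var val _
      # Per-node files prefix each line with "Node <n> "; drop that before parsing.
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"          # e.g. HugePages_Rsvd -> 0, HugePages_Total -> 512
              return 0
          fi
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      return 1
  }

Invoked as 'get_meminfo_sketch HugePages_Surp 0' it would read /sys/devices/system/node/node0/meminfo, which is the per-node pass the trace performs further below.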
00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:15.912 surplus_hugepages=0 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:15.912 anon_hugepages=0 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8853204 kB' 'MemAvailable: 10567652 kB' 'Buffers: 2436 kB' 'Cached: 1925516 kB' 'SwapCached: 0 kB' 'Active: 492644 kB' 'Inactive: 1556080 kB' 'Active(anon): 131236 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122668 kB' 'Mapped: 48828 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141260 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73460 kB' 'KernelStack: 6368 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.912 18:12:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.912 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.913 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.914 
18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8853204 kB' 'MemUsed: 3388768 kB' 'SwapCached: 0 kB' 'Active: 492744 kB' 'Inactive: 1556084 kB' 'Active(anon): 131336 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 1927956 kB' 'Mapped: 48828 kB' 'AnonPages: 122568 kB' 'Shmem: 10464 kB' 'KernelStack: 6432 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67800 kB' 'Slab: 141268 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.914 18:12:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.914 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.915 18:12:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.915 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:15.916 node0=512 expecting 512 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:15.916 00:05:15.916 real 0m0.527s 00:05:15.916 user 0m0.243s 00:05:15.916 sys 0m0.319s 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.916 18:12:27 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:15.916 ************************************ 00:05:15.916 END TEST custom_alloc 
00:05:15.916 ************************************ 00:05:15.916 18:12:27 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:15.916 18:12:27 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:15.916 18:12:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.916 18:12:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.916 18:12:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:15.916 ************************************ 00:05:15.916 START TEST no_shrink_alloc 00:05:15.916 ************************************ 00:05:15.916 18:12:27 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:05:15.916 18:12:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:15.916 18:12:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:15.916 18:12:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:15.916 18:12:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:15.916 18:12:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:15.916 18:12:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:15.916 18:12:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:15.916 18:12:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:15.916 18:12:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:15.916 18:12:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:15.916 18:12:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:15.916 18:12:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:15.916 18:12:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:15.916 18:12:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:15.916 18:12:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:15.916 18:12:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:15.916 18:12:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:15.916 18:12:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:15.916 18:12:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:15.916 18:12:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:15.916 18:12:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.916 18:12:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:16.174 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:16.174 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:16.174 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:16.437 
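At this point custom_alloc has confirmed that HugePages_Total (512) equals nr_hugepages plus surplus plus reserved and that node0 holds the expected 512 pages, and no_shrink_alloc repeats the same flow at double the size: get_test_nr_hugepages 2097152 0 turns the request into nr_hugepages=1024, all assigned to node 0, before verify_nr_hugepages re-reads the meminfo below. A sketch of that sizing step as it can be read off the trace (variable names and the kB unit are inferred, not taken from hugepages.sh; 2097152 kB split into the default 2048 kB hugepages gives the 1024 pages and the 'Hugetlb: 2097152 kB' seen in the next snapshot):

  # Illustrative reconstruction of the traced sizing logic, not the hugepages.sh source.
  default_hugepage_kb=2048            # Hugepagesize reported in /proc/meminfo
  requested_kb=2097152                # first argument of get_test_nr_hugepages in the trace
  nr_hugepages=$((requested_kb / default_hugepage_kb))   # 2097152 / 2048 = 1024
  node_ids=(0)                        # second argument: restrict the pages to node 0
  nodes_test=()
  for id in "${node_ids[@]}"; do
      nodes_test[id]=$nr_hugepages    # nodes_test[0]=1024, matching the xtrace
  done
  echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"

The verification that follows is the same accounting as above: read HugePages_Total, HugePages_Rsvd and the per-node HugePages_Surp, then check that the total matches nr_hugepages plus surplus plus reserved.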
18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7808800 kB' 'MemAvailable: 9523252 kB' 'Buffers: 2436 kB' 'Cached: 1925520 kB' 'SwapCached: 0 kB' 'Active: 493412 kB' 'Inactive: 1556084 kB' 'Active(anon): 132004 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123184 kB' 'Mapped: 49016 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141252 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73452 kB' 'KernelStack: 6388 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.437 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 
18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 
18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.438 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7808800 kB' 'MemAvailable: 9523252 kB' 'Buffers: 2436 kB' 'Cached: 1925520 kB' 'SwapCached: 0 kB' 'Active: 493080 kB' 'Inactive: 1556084 kB' 'Active(anon): 131672 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122836 kB' 'Mapped: 48960 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141272 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73472 kB' 'KernelStack: 6416 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.439 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.440 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7808800 kB' 'MemAvailable: 9523252 kB' 'Buffers: 2436 kB' 'Cached: 1925520 kB' 'SwapCached: 0 kB' 'Active: 492756 kB' 'Inactive: 1556084 kB' 'Active(anon): 131348 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122724 kB' 'Mapped: 48960 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141272 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73472 kB' 'KernelStack: 6384 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.441 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.442 18:12:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 18:12:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.442 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.443 18:12:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:16.443 nr_hugepages=1024 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:16.443 resv_hugepages=0 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:16.443 surplus_hugepages=0 00:05:16.443 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:16.444 anon_hugepages=0 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
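(Annotation) The per-key walk traced above is setup/common.sh's get_meminfo helper scanning /proc/meminfo (or a per-node meminfo file) with IFS=': ' until the requested field matches. A condensed, hedged re-sketch of that parsing pattern, not the script's exact code; the name get_meminfo_sketch and the sed-based prefix strip are illustrative:

    get_meminfo_sketch() {
        # get_meminfo_sketch FIELD [NODE] - print FIELD from /proc/meminfo, or from the
        # per-node meminfo file when NODE is given (per-node lines carry a "Node <n> "
        # prefix that has to be stripped before splitting on ': ').
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val rest
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val rest; do
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        echo 0   # fallback when the field is absent; the real helper may differ here
    }

    # e.g., in this run:
    #   get_meminfo_sketch HugePages_Rsvd      -> 0
    #   get_meminfo_sketch HugePages_Total     -> 1024
    #   get_meminfo_sketch HugePages_Surp 0    -> 0   (node 0 only)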
00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7808800 kB' 'MemAvailable: 9523252 kB' 'Buffers: 2436 kB' 'Cached: 1925520 kB' 'SwapCached: 0 kB' 'Active: 492708 kB' 'Inactive: 1556084 kB' 'Active(anon): 131300 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122676 kB' 'Mapped: 48960 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141264 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73464 kB' 'KernelStack: 6368 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
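(Annotation) The meminfo snapshot printed just above also shows the reservation this trace keeps verifying: 1024 huge pages of 2048 kB each, which is the Hugetlb figure. As a quick arithmetic check:

    echo $(( 1024 * 2048 ))   # 2097152 kB -- matches the 'Hugetlb: 2097152 kB' line in the snapshot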
00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.444 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.445 18:12:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
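(Annotation) The HugePages_Total lookup being traced here feeds the consistency check at setup/hugepages.sh@110: the kernel's total must equal nr_hugepages plus surplus and reserved pages (1024 == 1024 + 0 + 0 in this run). A self-contained sketch of that check, reading the counters straight from /proc rather than through the script's helpers:

    nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)                # 1024 in this run
    surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)   # 0
    resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)   # 0
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)  # 1024
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2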
00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.445 18:12:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.445 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7808296 kB' 'MemUsed: 4433676 kB' 'SwapCached: 0 kB' 'Active: 492916 kB' 'Inactive: 1556084 kB' 'Active(anon): 131508 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1927956 kB' 'Mapped: 48960 kB' 'AnonPages: 122624 kB' 'Shmem: 10464 kB' 'KernelStack: 6420 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67800 kB' 'Slab: 141264 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73464 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.446 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.447 18:12:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.447 
18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.447 18:12:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.447 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.448 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.448 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.448 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.448 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.448 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.448 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:16.448 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:16.448 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:16.448 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:16.448 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:16.448 node0=1024 expecting 1024 00:05:16.448 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:16.448 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:16.448 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:16.448 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:16.448 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:16.448 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.448 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:16.706 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:16.706 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:16.706 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:16.970 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:16.970 18:12:28 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7806788 kB' 'MemAvailable: 9521240 kB' 'Buffers: 2436 kB' 'Cached: 1925520 kB' 'SwapCached: 0 kB' 'Active: 493260 kB' 'Inactive: 1556084 kB' 'Active(anon): 131852 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123268 kB' 'Mapped: 49016 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141288 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73488 kB' 'KernelStack: 6496 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
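(Annotation) Before this second verification pass, the trace above printed the per-node comparison ("node0=1024 expecting 1024") and re-ran scripts/setup.sh with CLEAR_HUGE=no NRHUGE=512, which only logs "Requested 512 hugepages but 1024 already allocated on node0" and leaves the larger reservation in place, consistent with the no_shrink_alloc scenario this suite exercises. A hedged sketch of that per-node check; the variable names (expected, pages) are illustrative, not the script's own:

    expected=1024   # the earlier allocation this run is expected to keep
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        pages=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
        echo "node${node}=${pages} expecting ${expected}"
    done
    # CLEAR_HUGE=no NRHUGE=512 scripts/setup.sh   # requests fewer pages, does not shrink the pool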
00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.970 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
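(Annotation) The guard at setup/hugepages.sh@96 traced a little earlier only reads AnonHugePages when transparent hugepages are not disabled; here the sysfs setting is "always [madvise] never", so the read happens and comes back as 0 kB. A small sketch of that guard, assuming the standard sysfs path:

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)        # "always [madvise] never" in this run
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)  # 0 here
    else
        anon=0
    fi
    echo "anon_hugepages=${anon:-0}"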
00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.971 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
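What the run of 'continue' lines above amounts to: the harness's get_meminfo helper walks the captured meminfo snapshot key by key until the requested key (here AnonHugePages) matches, then echoes its value. A minimal sketch of that lookup, reconstructed from the xtrace rather than taken from the setup/common.sh seen in the trace, so the exact guards and helper details may differ:

    #!/usr/bin/env bash
    # Reconstructed sketch of the meminfo lookup exercised above (not the actual SPDK source).
    shopt -s extglob    # needed for the +([0-9]) pattern that strips "Node N " prefixes

    get_meminfo() {
            local get=$1 node=${2:-}
            local var val
            local mem_f mem
            mem_f=/proc/meminfo
            # when a NUMA node is given, prefer its per-node meminfo (the trace tests this path)
            if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
                    mem_f=/sys/devices/system/node/node$node/meminfo
            fi
            mapfile -t mem < "$mem_f"
            mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
            # scan key by key; non-matching keys fall through, matching the 'continue' lines above
            while IFS=': ' read -r var val _; do
                    [[ $var == "$get" ]] && echo "$val" && return 0
            done < <(printf '%s\n' "${mem[@]}")
            return 1
    }

    get_meminfo AnonHugePages   # prints 0 for the snapshot captured in this run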
00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:16.972 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7806564 kB' 'MemAvailable: 9521016 kB' 'Buffers: 2436 kB' 'Cached: 1925520 kB' 'SwapCached: 0 kB' 'Active: 493100 kB' 'Inactive: 1556084 kB' 'Active(anon): 131692 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 122800 kB' 'Mapped: 48832 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141308 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73508 kB' 'KernelStack: 6432 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB'
00:05:16.973 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-@32 -- # [condensed: every key of the snapshot above, MemTotal through HugePages_Rsvd, is compared against HugePages_Surp and skipped with 'continue']
00:05:16.974 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:16.974 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:16.974 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:16.974 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
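For comparison, the same hugepage counters can be spot-checked by hand. Using the values printed in the snapshot above, a manual check would look roughly like this (column spacing approximate):

    $ grep -E '^HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo
    HugePages_Total:    1024
    HugePages_Free:     1024
    HugePages_Rsvd:        0
    HugePages_Surp:        0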
00:05:16.974 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:16.974 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:16.974 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:16.974 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:16.974 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:16.974 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:16.974 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:16.974 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:16.974 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:16.974 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:16.974 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:16.974 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7806564 kB' 'MemAvailable: 9521016 kB' 'Buffers: 2436 kB' 'Cached: 1925520 kB' 'SwapCached: 0 kB' 'Active: 492808 kB' 'Inactive: 1556084 kB' 'Active(anon): 131400 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 122808 kB' 'Mapped: 48832 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141304 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73504 kB' 'KernelStack: 6432 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB'
00:05:16.975 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-@32 -- # [condensed: every key of the snapshot above, MemTotal through HugePages_Free, is compared against HugePages_Rsvd and skipped with 'continue']
00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:16.976 nr_hugepages=1024
00:05:16.976 resv_hugepages=0
00:05:16.976 surplus_hugepages=0
00:05:16.976 anon_hugepages=0
00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
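setup/hugepages.sh then cross-checks the counters it just gathered. A sketch of that accounting, reconstructed from the two (( ... )) traces above; the failure message and exit are illustrative assumptions, not taken from the script:

    # Sketch of the check at setup/hugepages.sh@107-@109, reconstructed from the trace.
    nr_hugepages=1024   # HugePages_Total from the snapshot
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    anon=0              # AnonHugePages

    # the pool must add up, and no_shrink_alloc expects the requested 1024 pages to remain allocated
    (( 1024 == nr_hugepages + surp + resv )) || { echo "unexpected hugepage accounting" >&2; exit 1; }
    (( 1024 == nr_hugepages )) && echo "hugepage pool was not shrunk"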
setup/common.sh@28 -- # mapfile -t mem 00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7806564 kB' 'MemAvailable: 9521016 kB' 'Buffers: 2436 kB' 'Cached: 1925520 kB' 'SwapCached: 0 kB' 'Active: 492752 kB' 'Inactive: 1556084 kB' 'Active(anon): 131344 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 kB' 'Inactive(file): 1556084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 122712 kB' 'Mapped: 48832 kB' 'Shmem: 10464 kB' 'KReclaimable: 67800 kB' 'Slab: 141304 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73504 kB' 'KernelStack: 6416 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.976 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
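At this point in the trace hugepages.sh has finished collecting HugePages_Rsvd (0) and has echoed the values it is about to verify: nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0. The assertions at hugepages.sh@107/@109 then require that the reported total equals nr_hugepages + surplus + reserved, which reduces to 1024 == 1024 in this run since surplus and reserved are both zero. A minimal standalone sketch of that accounting check, reading /proc/meminfo directly; the helper name and variable sources below are illustrative, not the SPDK ones (the test compares against the pool size it configured, the sketch reads the kernel's current value):

    #!/usr/bin/env bash
    # Sketch of the hugepage accounting check asserted in the trace above.
    set -euo pipefail

    meminfo_field() {
        # Print the value of a /proc/meminfo key, e.g. "HugePages_Total" -> 1024
        awk -v key="$1:" '$1 == key { print $2 }' /proc/meminfo
    }

    total=$(meminfo_field HugePages_Total)
    rsvd=$(meminfo_field HugePages_Rsvd)
    surp=$(meminfo_field HugePages_Surp)
    nr=$(cat /proc/sys/vm/nr_hugepages)   # assumption: kernel value stands in for the configured pool

    echo "nr_hugepages=$nr total=$total rsvd=$rsvd surp=$surp"

    # Same arithmetic the trace asserts at hugepages.sh@107/@110.
    (( total == nr + surp + rsvd )) || { echo 'hugepage accounting mismatch' >&2; exit 1; }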
00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.977 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
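The long run of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] ... continue" records above is setup/common.sh's get_meminfo scanning every line of the meminfo snapshot it captured with mapfile until the requested key matches; the "Node +([0-9]) " prefix strip at common.sh@29 is what lets the same loop parse both /proc/meminfo and the per-node files. A condensed sketch of that lookup pattern (function and variable names are illustrative, not the SPDK originals):

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup pattern visible in the trace: snapshot the
    # file, strip the "Node N " prefix used by per-node meminfo, then split
    # each line on ': ' and stop at the requested key.
    shopt -s extglob

    get_field() {
        local get=$1 node=${2-} line var val _
        local mem_f=/proc/meminfo
        local -a mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # no-op for /proc/meminfo lines
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    get_field HugePages_Total      # system-wide, as in the loop above
    get_field HugePages_Surp 0     # NUMA node 0, as in the records that follow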
00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7806564 kB' 'MemUsed: 4435408 kB' 'SwapCached: 0 kB' 'Active: 492752 kB' 'Inactive: 1556084 kB' 'Active(anon): 131344 kB' 'Inactive(anon): 0 kB' 'Active(file): 361408 
kB' 'Inactive(file): 1556084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 1927956 kB' 'Mapped: 48832 kB' 'AnonPages: 122712 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67800 kB' 'Slab: 141304 kB' 'SReclaimable: 67800 kB' 'SUnreclaim: 73504 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.978 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 
18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.979 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.980 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.980 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.980 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.980 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.980 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.980 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:16.980 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.980 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.980 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.980 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.980 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:16.980 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:16.980 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:16.980 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:16.980 node0=1024 expecting 1024 00:05:16.980 ************************************ 00:05:16.980 END TEST no_shrink_alloc 00:05:16.980 ************************************ 00:05:16.980 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:16.980 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:16.980 18:12:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:16.980 00:05:16.980 real 0m1.099s 00:05:16.980 user 0m0.532s 00:05:16.980 sys 0m0.608s 00:05:16.980 18:12:28 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.980 18:12:28 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:16.980 18:12:28 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:16.980 18:12:28 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:16.980 18:12:28 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:16.980 18:12:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:16.980 
18:12:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:16.980 18:12:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:16.980 18:12:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:16.980 18:12:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:16.980 18:12:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:16.980 18:12:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:16.980 00:05:16.980 real 0m4.839s 00:05:16.980 user 0m2.258s 00:05:16.980 sys 0m2.605s 00:05:16.980 ************************************ 00:05:16.980 END TEST hugepages 00:05:16.980 ************************************ 00:05:16.980 18:12:28 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.980 18:12:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:17.239 18:12:29 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:17.239 18:12:29 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:17.239 18:12:29 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.239 18:12:29 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.239 18:12:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:17.239 ************************************ 00:05:17.239 START TEST driver 00:05:17.239 ************************************ 00:05:17.239 18:12:29 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:17.239 * Looking for test storage... 00:05:17.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:17.239 18:12:29 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:17.239 18:12:29 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:17.239 18:12:29 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:17.806 18:12:29 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:17.806 18:12:29 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.806 18:12:29 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.806 18:12:29 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:17.806 ************************************ 00:05:17.806 START TEST guess_driver 00:05:17.806 ************************************ 00:05:17.806 18:12:29 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:17.806 18:12:29 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:17.806 18:12:29 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:17.806 18:12:29 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:17.806 18:12:29 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:17.806 18:12:29 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:17.806 18:12:29 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:17.806 18:12:29 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:17.806 18:12:29 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
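Here guess_driver is probing whether vfio is usable inside the VM: it looks for populated /sys/kernel/iommu_groups entries and for vfio's enable_unsafe_noiommu_mode parameter; in the records that follow both checks fail ((( 0 > 0 )) and [[ '' == Y ]]), so it falls back to uio_pci_generic after confirming with modprobe --show-depends that the module can actually be loaded. A standalone sketch of that decision (function name and exact checks are illustrative):

    #!/usr/bin/env bash
    # Sketch of the driver-guessing logic traced above: prefer vfio-pci when
    # IOMMU groups exist (or unsafe no-IOMMU mode is enabled), otherwise fall
    # back to uio_pci_generic if modprobe can resolve its .ko dependencies.
    pick_driver() {
        local groups=(/sys/kernel/iommu_groups/*)
        local unsafe=/sys/module/vfio/parameters/enable_unsafe_noiommu_mode
        if [[ -e ${groups[0]} ]] && (( ${#groups[@]} > 0 )); then
            echo vfio-pci
            return 0
        fi
        if [[ -e $unsafe && $(<"$unsafe") == Y ]]; then
            echo vfio-pci
            return 0
        fi
        # No IOMMU available: try the generic UIO driver instead.
        if modprobe --show-depends uio_pci_generic 2> /dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
            return 0
        fi
        echo 'No valid driver found' >&2
        return 1
    }

    pick_driver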
00:05:17.806 18:12:29 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:17.806 18:12:29 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:17.806 18:12:29 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:17.806 18:12:29 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:17.806 18:12:29 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:17.806 18:12:29 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:17.806 18:12:29 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:17.806 18:12:29 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:17.806 18:12:29 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:17.806 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:17.806 18:12:29 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:17.806 18:12:29 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:17.806 18:12:29 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:17.806 Looking for driver=uio_pci_generic 00:05:17.806 18:12:29 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:17.806 18:12:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.806 18:12:29 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:17.806 18:12:29 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.806 18:12:29 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:18.740 18:12:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:18.740 18:12:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:18.740 18:12:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:18.740 18:12:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:18.740 18:12:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:18.740 18:12:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:18.740 18:12:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:18.740 18:12:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:18.740 18:12:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:18.740 18:12:30 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:18.740 18:12:30 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:18.740 18:12:30 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:18.740 18:12:30 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:19.306 00:05:19.306 real 0m1.486s 00:05:19.306 user 0m0.552s 00:05:19.306 sys 0m0.954s 00:05:19.306 18:12:31 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:05:19.306 ************************************ 00:05:19.306 END TEST guess_driver 00:05:19.306 ************************************ 00:05:19.306 18:12:31 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:19.306 18:12:31 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:19.306 ************************************ 00:05:19.306 END TEST driver 00:05:19.306 ************************************ 00:05:19.306 00:05:19.306 real 0m2.205s 00:05:19.306 user 0m0.800s 00:05:19.306 sys 0m1.459s 00:05:19.306 18:12:31 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.306 18:12:31 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:19.306 18:12:31 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:19.306 18:12:31 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:19.306 18:12:31 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.306 18:12:31 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.306 18:12:31 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:19.306 ************************************ 00:05:19.306 START TEST devices 00:05:19.306 ************************************ 00:05:19.306 18:12:31 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:19.565 * Looking for test storage... 00:05:19.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:19.565 18:12:31 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:19.565 18:12:31 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:19.565 18:12:31 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:19.565 18:12:31 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:20.131 18:12:32 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:20.131 18:12:32 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:20.131 18:12:32 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:20.131 18:12:32 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:20.131 18:12:32 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:20.131 18:12:32 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:20.131 18:12:32 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:20.131 18:12:32 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:20.131 18:12:32 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:20.131 18:12:32 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:20.131 18:12:32 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:05:20.131 18:12:32 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:05:20.131 18:12:32 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:05:20.131 18:12:32 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:20.131 18:12:32 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:20.131 18:12:32 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
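The is_block_zoned calls above decide whether each NVMe namespace is a zoned block device by reading its queue/zoned sysfs attribute; anything other than "none" would be excluded from the device pool the test builds next. A minimal sketch of that filter (array and loop names are illustrative):

    #!/usr/bin/env bash
    # Sketch of the zoned-device filter traced above: a block device is zoned
    # when /sys/block/<dev>/queue/zoned reports something other than "none".
    is_block_zoned() {
        local zoned=/sys/block/$1/queue/zoned
        [[ -e $zoned && $(<"$zoned") != none ]]
    }

    declare -A zoned_devs=()
    for path in /sys/block/nvme*; do
        [[ -e $path ]] || continue          # glob may not match on this host
        dev=${path##*/}
        is_block_zoned "$dev" && zoned_devs[$dev]=1
    done
    echo "found ${#zoned_devs[@]} zoned device(s): ${!zoned_devs[*]}"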
00:05:20.131 18:12:32 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:05:20.131 18:12:32 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:05:20.131 18:12:32 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:20.131 18:12:32 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:20.131 18:12:32 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:20.131 18:12:32 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:20.131 18:12:32 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:20.131 18:12:32 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:20.131 18:12:32 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:20.131 18:12:32 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:20.131 18:12:32 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:20.131 18:12:32 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:20.131 18:12:32 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:20.131 18:12:32 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:20.131 18:12:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:20.131 18:12:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:20.131 18:12:32 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:20.131 18:12:32 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:20.131 18:12:32 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:20.131 18:12:32 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:20.131 18:12:32 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:20.390 No valid GPT data, bailing 00:05:20.390 18:12:32 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:20.390 18:12:32 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:20.390 18:12:32 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:20.390 18:12:32 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:20.390 18:12:32 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:20.390 18:12:32 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:05:20.390 
18:12:32 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:05:20.390 18:12:32 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:05:20.390 No valid GPT data, bailing 00:05:20.390 18:12:32 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:05:20.390 18:12:32 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:20.390 18:12:32 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:05:20.390 18:12:32 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:05:20.390 18:12:32 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:05:20.390 18:12:32 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:05:20.390 18:12:32 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:05:20.390 18:12:32 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:05:20.390 No valid GPT data, bailing 00:05:20.390 18:12:32 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:05:20.390 18:12:32 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:20.390 18:12:32 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:05:20.390 18:12:32 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:05:20.390 18:12:32 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:05:20.390 18:12:32 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:20.390 18:12:32 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:20.390 18:12:32 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:20.390 18:12:32 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:20.648 No valid GPT data, bailing 00:05:20.648 18:12:32 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:20.648 18:12:32 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:20.648 18:12:32 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:20.648 18:12:32 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:20.648 18:12:32 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:20.648 18:12:32 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:20.648 18:12:32 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:20.648 18:12:32 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:20.648 18:12:32 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:20.648 18:12:32 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:20.648 18:12:32 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:20.648 18:12:32 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:20.648 18:12:32 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:20.648 18:12:32 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.648 18:12:32 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.648 18:12:32 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:20.648 ************************************ 00:05:20.648 START TEST nvme_mount 00:05:20.648 ************************************ 00:05:20.648 18:12:32 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:20.648 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:20.648 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:20.648 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:20.648 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:20.648 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:20.648 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:20.648 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:20.648 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:20.648 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:20.648 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:20.648 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:20.648 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:20.648 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:20.648 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:20.648 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:20.648 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:20.648 18:12:32 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:05:20.648 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:20.648 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:21.581 Creating new GPT entries in memory. 00:05:21.581 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:21.581 other utilities. 00:05:21.581 18:12:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:21.581 18:12:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:21.581 18:12:33 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:21.581 18:12:33 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:21.581 18:12:33 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:22.955 Creating new GPT entries in memory. 00:05:22.955 The operation has completed successfully. 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 59615 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:22.955 18:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.220 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:23.220 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.220 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:23.220 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.220 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:23.220 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:23.220 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.220 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:23.220 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:23.221 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:23.221 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.221 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.221 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.221 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:23.221 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:23.221 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:23.221 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:23.787 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:23.787 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:23.787 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:23.787 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:23.787 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@113 
-- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:23.787 18:12:35 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:23.787 18:12:35 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.787 18:12:35 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:23.787 18:12:35 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:23.788 18:12:35 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.788 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:23.788 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:23.788 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:23.788 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.788 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:23.788 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:23.788 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:23.788 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:23.788 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:23.788 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:23.788 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.788 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:23.788 18:12:35 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.788 18:12:35 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:23.788 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:23.788 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:23.788 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:23.788 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.788 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:23.788 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.045 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:24.045 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.045 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:24.045 18:12:35 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.045 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:24.045 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:24.045 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.045 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:24.045 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:24.045 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.045 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:24.045 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:24.045 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:24.045 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:24.045 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:24.045 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:24.045 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:24.045 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:24.045 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.045 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:24.045 18:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:24.046 18:12:35 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.046 18:12:35 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:24.303 18:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:24.303 18:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:24.303 18:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:24.303 18:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.303 18:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:24.303 18:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.561 18:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:24.561 18:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.561 18:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:24.561 18:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.561 18:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:24.561 18:12:36 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:05:24.561 18:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:24.561 18:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:24.561 18:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.561 18:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:24.561 18:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:24.561 18:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:24.561 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:24.561 00:05:24.561 real 0m3.983s 00:05:24.561 user 0m0.693s 00:05:24.561 sys 0m0.928s 00:05:24.561 18:12:36 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.561 18:12:36 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:24.561 ************************************ 00:05:24.561 END TEST nvme_mount 00:05:24.561 ************************************ 00:05:24.561 18:12:36 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:24.561 18:12:36 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:24.561 18:12:36 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.561 18:12:36 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.561 18:12:36 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:24.561 ************************************ 00:05:24.561 START TEST dm_mount 00:05:24.561 ************************************ 00:05:24.561 18:12:36 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:24.561 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:24.561 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:24.561 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:24.561 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:24.561 18:12:36 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:24.561 18:12:36 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:24.561 18:12:36 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:24.561 18:12:36 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:24.561 18:12:36 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:24.561 18:12:36 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:24.561 18:12:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:24.561 18:12:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:24.561 18:12:36 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:24.561 18:12:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:24.561 18:12:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:24.561 18:12:36 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:24.561 18:12:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:24.561 18:12:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
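The nvme_mount test that just completed and the dm_mount test being set up above both rely on the same partition_drive helper: wipe any existing partition table, carve out small test partitions with sgdisk, and wait for the kernel to publish the new block nodes before using them (the trace of the real helper doing this for two partitions continues below). A minimal stand-alone sketch of that flow, with the disk name and partition size as placeholders and udevadm settle standing in for the repo's sync_dev_uevents.sh:

#!/usr/bin/env bash
# Sketch only: wipes DISK and creates two small GPT partitions, then waits for udev.
set -euo pipefail

DISK=/dev/nvme0n1        # assumed test disk -- everything on it is destroyed
PART_SECTORS=262144      # 128 MiB in 512-byte sectors, matching the 2048..264191 range in the trace

sgdisk "$DISK" --zap-all                     # drop old GPT/MBR structures
start=2048
for part in 1 2; do
    end=$(( start + PART_SECTORS - 1 ))
    flock "$DISK" sgdisk "$DISK" --new="$part:$start:$end"   # flock serializes competing sgdisk calls
    start=$(( end + 1 ))
done
udevadm settle                               # stand-in for scripts/sync_dev_uevents.sh
lsblk "$DISK"                                # ${DISK}p1 and ${DISK}p2 should now be listed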
00:05:24.561 18:12:36 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:24.561 18:12:36 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:24.561 18:12:36 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:25.496 Creating new GPT entries in memory. 00:05:25.497 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:25.497 other utilities. 00:05:25.497 18:12:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:25.497 18:12:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:25.497 18:12:37 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:25.497 18:12:37 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:25.497 18:12:37 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:26.872 Creating new GPT entries in memory. 00:05:26.872 The operation has completed successfully. 00:05:26.872 18:12:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:26.872 18:12:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:26.872 18:12:38 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:26.872 18:12:38 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:26.872 18:12:38 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:27.806 The operation has completed successfully. 
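With both partitions created, the dm_mount test next stacks a device-mapper device on top of them (the dmsetup create nvme_dm_test call traced below). The exact table the test passes to dmsetup is not visible in this excerpt; a linear concatenation like the following is just one plausible way to end up with an equivalent /dev/mapper node, using the sector counts from the partition ranges above:

#!/usr/bin/env bash
# Sketch only: join two partitions into a single dm "linear" device named nvme_dm_test.
set -euo pipefail

P1=/dev/nvme0n1p1
P2=/dev/nvme0n1p2
SZ=262144    # sectors in each partition (2048..264191 and 264192..526335)

dmsetup create nvme_dm_test <<EOF
0 $SZ linear $P1 0
$SZ $SZ linear $P2 0
EOF

readlink -f /dev/mapper/nvme_dm_test     # resolves to /dev/dm-N once the device exists
ls /sys/class/block/nvme0n1p1/holders    # both partitions should now list dm-N as a holder

Tear-down mirrors the cleanup_dm trace further below: dmsetup remove --force nvme_dm_test, then wipefs --all on each partition.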
00:05:27.806 18:12:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:27.806 18:12:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:27.806 18:12:39 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 60051 00:05:27.806 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:27.806 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:27.806 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:27.806 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:27.806 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:27.806 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:27.806 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:27.806 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:27.806 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:27.806 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:27.807 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:27.807 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:27.807 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:27.807 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:27.807 18:12:39 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:27.807 18:12:39 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:27.807 18:12:39 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:27.807 18:12:39 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:27.807 18:12:39 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:27.807 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:27.807 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:27.807 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:27.807 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:27.807 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:27.807 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:27.807 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:27.807 18:12:39 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:05:27.807 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:27.807 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.807 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:27.807 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:27.807 18:12:39 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.807 18:12:39 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:28.065 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:28.065 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:28.065 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:28.065 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.065 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:28.065 18:12:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.065 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:28.065 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.323 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:28.323 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.323 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:28.323 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:28.323 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:28.323 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:28.323 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:28.323 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:28.323 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:28.323 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:28.323 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:28.323 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:28.323 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:28.323 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:28.323 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:28.323 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:28.323 18:12:40 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.323 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:28.323 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:28.323 18:12:40 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.323 18:12:40 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:28.582 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:28.582 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:28.582 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:28.582 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.582 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:28.582 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.582 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:28.582 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.841 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:28.841 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.841 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:28.841 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:28.841 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:28.841 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:28.841 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:28.841 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:28.841 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:28.841 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:28.841 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:28.841 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:28.841 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:28.841 18:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:28.841 00:05:28.841 real 0m4.302s 00:05:28.841 user 0m0.483s 00:05:28.841 sys 0m0.766s 00:05:28.841 18:12:40 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.841 ************************************ 00:05:28.841 END TEST dm_mount 00:05:28.841 ************************************ 00:05:28.841 18:12:40 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:28.841 18:12:40 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:28.841 18:12:40 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:28.841 18:12:40 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:28.841 18:12:40 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.841 18:12:40 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:28.841 18:12:40 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:28.841 18:12:40 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:28.841 18:12:40 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:29.100 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:29.100 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:29.100 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:29.100 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:29.100 18:12:41 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:29.100 18:12:41 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:29.359 18:12:41 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:29.359 18:12:41 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:29.359 18:12:41 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:29.359 18:12:41 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:29.359 18:12:41 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:29.360 00:05:29.360 real 0m9.857s 00:05:29.360 user 0m1.839s 00:05:29.360 sys 0m2.318s 00:05:29.360 18:12:41 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.360 18:12:41 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:29.360 ************************************ 00:05:29.360 END TEST devices 00:05:29.360 ************************************ 00:05:29.360 18:12:41 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:29.360 00:05:29.360 real 0m22.162s 00:05:29.360 user 0m7.112s 00:05:29.360 sys 0m9.360s 00:05:29.360 18:12:41 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.360 18:12:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:29.360 ************************************ 00:05:29.360 END TEST setup.sh 00:05:29.360 ************************************ 00:05:29.360 18:12:41 -- common/autotest_common.sh@1142 -- # return 0 00:05:29.360 18:12:41 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:29.927 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:29.927 Hugepages 00:05:29.927 node hugesize free / total 00:05:29.927 node0 1048576kB 0 / 0 00:05:29.927 node0 2048kB 2048 / 2048 00:05:29.927 00:05:29.927 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:30.186 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:30.186 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:30.186 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:05:30.186 18:12:42 -- spdk/autotest.sh@130 -- # uname -s 00:05:30.186 18:12:42 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:30.186 18:12:42 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:30.186 18:12:42 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:31.122 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:31.122 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:31.122 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:31.122 18:12:43 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:32.093 18:12:44 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:32.093 18:12:44 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:32.093 18:12:44 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:32.093 18:12:44 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:32.093 18:12:44 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:32.093 18:12:44 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:32.093 18:12:44 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:32.093 18:12:44 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:32.093 18:12:44 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:32.351 18:12:44 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:05:32.351 18:12:44 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:32.351 18:12:44 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:32.610 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:32.610 Waiting for block devices as requested 00:05:32.610 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:32.610 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:32.869 18:12:44 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:32.869 18:12:44 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:32.869 18:12:44 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:32.869 18:12:44 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:05:32.869 18:12:44 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:32.869 18:12:44 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:32.869 18:12:44 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:32.869 18:12:44 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:05:32.869 18:12:44 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:05:32.869 18:12:44 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:05:32.869 18:12:44 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:05:32.869 18:12:44 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:32.869 18:12:44 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:32.869 18:12:44 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:32.869 18:12:44 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:32.869 18:12:44 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:32.869 18:12:44 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:05:32.869 18:12:44 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:32.869 18:12:44 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:32.869 18:12:44 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:32.869 18:12:44 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:32.869 18:12:44 -- common/autotest_common.sh@1557 -- # continue 00:05:32.869 
18:12:44 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:32.869 18:12:44 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:32.869 18:12:44 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:32.869 18:12:44 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:05:32.869 18:12:44 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:32.869 18:12:44 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:32.869 18:12:44 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:32.869 18:12:44 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:32.869 18:12:44 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:32.869 18:12:44 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:32.869 18:12:44 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:32.869 18:12:44 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:32.869 18:12:44 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:32.869 18:12:44 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:32.869 18:12:44 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:32.869 18:12:44 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:32.869 18:12:44 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:32.869 18:12:44 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:32.869 18:12:44 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:32.869 18:12:44 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:32.869 18:12:44 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:32.869 18:12:44 -- common/autotest_common.sh@1557 -- # continue 00:05:32.869 18:12:44 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:32.869 18:12:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:32.869 18:12:44 -- common/autotest_common.sh@10 -- # set +x 00:05:32.869 18:12:44 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:32.869 18:12:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:32.869 18:12:44 -- common/autotest_common.sh@10 -- # set +x 00:05:32.869 18:12:44 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:33.839 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:33.839 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:33.839 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:33.839 18:12:45 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:33.839 18:12:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.839 18:12:45 -- common/autotest_common.sh@10 -- # set +x 00:05:33.839 18:12:45 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:33.839 18:12:45 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:33.839 18:12:45 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:33.839 18:12:45 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:33.839 18:12:45 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:33.839 18:12:45 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:33.839 18:12:45 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:33.839 18:12:45 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:33.839 18:12:45 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:33.839 18:12:45 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:33.839 18:12:45 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:33.839 18:12:45 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:05:33.839 18:12:45 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:33.839 18:12:45 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:33.839 18:12:45 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:33.839 18:12:45 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:33.839 18:12:45 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:33.839 18:12:45 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:33.839 18:12:45 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:33.839 18:12:45 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:33.839 18:12:45 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:33.839 18:12:45 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:33.839 18:12:45 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:33.839 18:12:45 -- common/autotest_common.sh@1593 -- # return 0 00:05:33.839 18:12:45 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:33.839 18:12:45 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:33.839 18:12:45 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:33.839 18:12:45 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:33.839 18:12:45 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:33.839 18:12:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.839 18:12:45 -- common/autotest_common.sh@10 -- # set +x 00:05:33.839 18:12:45 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:33.839 18:12:45 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:33.839 18:12:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:33.839 18:12:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.839 18:12:45 -- common/autotest_common.sh@10 -- # set +x 00:05:33.839 ************************************ 00:05:33.839 START TEST env 00:05:33.839 ************************************ 00:05:33.839 18:12:45 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:34.097 * Looking for test storage... 
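The pre_cleanup and opal_revert_cleanup passes above lean on two building blocks: scripts/gen_nvme.sh piped through jq to enumerate NVMe PCI addresses, and nvme id-ctrl to read controller capabilities (the OACS field and the unallocated capacity unvmcap). A condensed version of that loop, using only commands that appear in the trace, with the same repo path assumed:

#!/usr/bin/env bash
# Sketch only: enumerate NVMe controllers the way the autotest helpers do and dump OACS/unvmcap.
set -euo pipefail

rootdir=/home/vagrant/spdk_repo/spdk
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

for bdf in "${bdfs[@]}"; do
    # Resolve which /dev/nvmeX controller sits behind this PCI address.
    ctrl=$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")
    oacs=$(nvme id-ctrl "/dev/$ctrl" | grep oacs | cut -d: -f2)
    unvmcap=$(nvme id-ctrl "/dev/$ctrl" | grep unvmcap | cut -d: -f2)
    # Bit 3 of OACS (value 8) is namespace management support, the "8" the trace derives from 0x12a.
    printf '%s -> %s oacs=%s unvmcap=%s\n' "$bdf" "$ctrl" "$oacs" "$unvmcap"
done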
00:05:34.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:34.097 18:12:45 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:34.097 18:12:45 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.097 18:12:45 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.097 18:12:45 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.097 ************************************ 00:05:34.097 START TEST env_memory 00:05:34.097 ************************************ 00:05:34.097 18:12:45 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:34.097 00:05:34.097 00:05:34.097 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.097 http://cunit.sourceforge.net/ 00:05:34.097 00:05:34.097 00:05:34.097 Suite: memory 00:05:34.097 Test: alloc and free memory map ...[2024-07-22 18:12:46.020571] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:34.097 passed 00:05:34.097 Test: mem map translation ...[2024-07-22 18:12:46.092500] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:34.097 [2024-07-22 18:12:46.092590] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:34.097 [2024-07-22 18:12:46.092700] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:34.097 [2024-07-22 18:12:46.092735] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:34.355 passed 00:05:34.355 Test: mem map registration ...[2024-07-22 18:12:46.211492] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:34.355 [2024-07-22 18:12:46.211583] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:34.355 passed 00:05:34.355 Test: mem map adjacent registrations ...passed 00:05:34.355 00:05:34.355 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.355 suites 1 1 n/a 0 0 00:05:34.355 tests 4 4 4 0 0 00:05:34.355 asserts 152 152 152 0 n/a 00:05:34.355 00:05:34.355 Elapsed time = 0.405 seconds 00:05:34.613 00:05:34.613 real 0m0.453s 00:05:34.614 user 0m0.404s 00:05:34.614 sys 0m0.038s 00:05:34.614 18:12:46 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.614 ************************************ 00:05:34.614 END TEST env_memory 00:05:34.614 18:12:46 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:34.614 ************************************ 00:05:34.614 18:12:46 env -- common/autotest_common.sh@1142 -- # return 0 00:05:34.614 18:12:46 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:34.614 18:12:46 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.614 18:12:46 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.614 18:12:46 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.614 ************************************ 00:05:34.614 START TEST env_vtophys 
00:05:34.614 ************************************ 00:05:34.614 18:12:46 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:34.614 EAL: lib.eal log level changed from notice to debug 00:05:34.614 EAL: Detected lcore 0 as core 0 on socket 0 00:05:34.614 EAL: Detected lcore 1 as core 0 on socket 0 00:05:34.614 EAL: Detected lcore 2 as core 0 on socket 0 00:05:34.614 EAL: Detected lcore 3 as core 0 on socket 0 00:05:34.614 EAL: Detected lcore 4 as core 0 on socket 0 00:05:34.614 EAL: Detected lcore 5 as core 0 on socket 0 00:05:34.614 EAL: Detected lcore 6 as core 0 on socket 0 00:05:34.614 EAL: Detected lcore 7 as core 0 on socket 0 00:05:34.614 EAL: Detected lcore 8 as core 0 on socket 0 00:05:34.614 EAL: Detected lcore 9 as core 0 on socket 0 00:05:34.614 EAL: Maximum logical cores by configuration: 128 00:05:34.614 EAL: Detected CPU lcores: 10 00:05:34.614 EAL: Detected NUMA nodes: 1 00:05:34.614 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:34.614 EAL: Detected shared linkage of DPDK 00:05:34.614 EAL: No shared files mode enabled, IPC will be disabled 00:05:34.614 EAL: Selected IOVA mode 'PA' 00:05:34.614 EAL: Probing VFIO support... 00:05:34.614 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:34.614 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:34.614 EAL: Ask a virtual area of 0x2e000 bytes 00:05:34.614 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:34.614 EAL: Setting up physically contiguous memory... 00:05:34.614 EAL: Setting maximum number of open files to 524288 00:05:34.614 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:34.614 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:34.614 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.614 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:34.614 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.614 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.614 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:34.614 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:34.614 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.614 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:34.614 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.614 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.614 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:34.614 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:34.614 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.614 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:34.614 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.614 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.614 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:34.614 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:34.614 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.614 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:34.614 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.614 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.614 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:34.614 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:34.614 EAL: Hugepages will be freed exactly as allocated. 
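The memseg lists reserved above are only virtual address space; they get backed on demand by the 2048 kB hugepages that scripts/setup.sh reserved earlier in the run (the "node0 2048kB 2048 / 2048" line in the status output). When these env tests fail with allocation errors, checking and topping up the hugepage pool is the usual first step. A hedged sketch, with the page count as an arbitrary example rather than the value this CI job uses:

#!/usr/bin/env bash
# Sketch only: inspect and (re)reserve 2 MB hugepages before rerunning the env tests.
set -euo pipefail

grep -i huge /proc/meminfo      # HugePages_Total / HugePages_Free / Hugepagesize

# Reserve 1024 x 2 MB pages on NUMA node 0 (needs root).
echo 1024 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

# The SPDK helper does the reservation plus driver binding in one go; HUGEMEM is in megabytes.
sudo HUGEMEM=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh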
00:05:34.614 EAL: No shared files mode enabled, IPC is disabled 00:05:34.614 EAL: No shared files mode enabled, IPC is disabled 00:05:34.872 EAL: TSC frequency is ~2200000 KHz 00:05:34.872 EAL: Main lcore 0 is ready (tid=7f7239019a40;cpuset=[0]) 00:05:34.872 EAL: Trying to obtain current memory policy. 00:05:34.872 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.872 EAL: Restoring previous memory policy: 0 00:05:34.872 EAL: request: mp_malloc_sync 00:05:34.872 EAL: No shared files mode enabled, IPC is disabled 00:05:34.872 EAL: Heap on socket 0 was expanded by 2MB 00:05:34.872 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:34.872 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:34.872 EAL: Mem event callback 'spdk:(nil)' registered 00:05:34.872 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:34.872 00:05:34.872 00:05:34.872 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.872 http://cunit.sourceforge.net/ 00:05:34.872 00:05:34.872 00:05:34.872 Suite: components_suite 00:05:35.439 Test: vtophys_malloc_test ...passed 00:05:35.439 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:35.439 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.439 EAL: Restoring previous memory policy: 4 00:05:35.439 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.439 EAL: request: mp_malloc_sync 00:05:35.439 EAL: No shared files mode enabled, IPC is disabled 00:05:35.439 EAL: Heap on socket 0 was expanded by 4MB 00:05:35.439 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.439 EAL: request: mp_malloc_sync 00:05:35.439 EAL: No shared files mode enabled, IPC is disabled 00:05:35.439 EAL: Heap on socket 0 was shrunk by 4MB 00:05:35.439 EAL: Trying to obtain current memory policy. 00:05:35.439 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.439 EAL: Restoring previous memory policy: 4 00:05:35.439 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.439 EAL: request: mp_malloc_sync 00:05:35.439 EAL: No shared files mode enabled, IPC is disabled 00:05:35.439 EAL: Heap on socket 0 was expanded by 6MB 00:05:35.439 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.439 EAL: request: mp_malloc_sync 00:05:35.439 EAL: No shared files mode enabled, IPC is disabled 00:05:35.439 EAL: Heap on socket 0 was shrunk by 6MB 00:05:35.439 EAL: Trying to obtain current memory policy. 00:05:35.439 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.439 EAL: Restoring previous memory policy: 4 00:05:35.439 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.439 EAL: request: mp_malloc_sync 00:05:35.439 EAL: No shared files mode enabled, IPC is disabled 00:05:35.439 EAL: Heap on socket 0 was expanded by 10MB 00:05:35.439 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.439 EAL: request: mp_malloc_sync 00:05:35.439 EAL: No shared files mode enabled, IPC is disabled 00:05:35.439 EAL: Heap on socket 0 was shrunk by 10MB 00:05:35.439 EAL: Trying to obtain current memory policy. 
00:05:35.439 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.439 EAL: Restoring previous memory policy: 4 00:05:35.439 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.439 EAL: request: mp_malloc_sync 00:05:35.439 EAL: No shared files mode enabled, IPC is disabled 00:05:35.439 EAL: Heap on socket 0 was expanded by 18MB 00:05:35.439 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.439 EAL: request: mp_malloc_sync 00:05:35.439 EAL: No shared files mode enabled, IPC is disabled 00:05:35.439 EAL: Heap on socket 0 was shrunk by 18MB 00:05:35.439 EAL: Trying to obtain current memory policy. 00:05:35.439 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.439 EAL: Restoring previous memory policy: 4 00:05:35.439 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.439 EAL: request: mp_malloc_sync 00:05:35.439 EAL: No shared files mode enabled, IPC is disabled 00:05:35.439 EAL: Heap on socket 0 was expanded by 34MB 00:05:35.439 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.439 EAL: request: mp_malloc_sync 00:05:35.439 EAL: No shared files mode enabled, IPC is disabled 00:05:35.439 EAL: Heap on socket 0 was shrunk by 34MB 00:05:35.439 EAL: Trying to obtain current memory policy. 00:05:35.439 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.439 EAL: Restoring previous memory policy: 4 00:05:35.439 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.439 EAL: request: mp_malloc_sync 00:05:35.439 EAL: No shared files mode enabled, IPC is disabled 00:05:35.439 EAL: Heap on socket 0 was expanded by 66MB 00:05:35.698 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.698 EAL: request: mp_malloc_sync 00:05:35.698 EAL: No shared files mode enabled, IPC is disabled 00:05:35.698 EAL: Heap on socket 0 was shrunk by 66MB 00:05:35.698 EAL: Trying to obtain current memory policy. 00:05:35.698 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.698 EAL: Restoring previous memory policy: 4 00:05:35.698 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.698 EAL: request: mp_malloc_sync 00:05:35.698 EAL: No shared files mode enabled, IPC is disabled 00:05:35.698 EAL: Heap on socket 0 was expanded by 130MB 00:05:35.955 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.955 EAL: request: mp_malloc_sync 00:05:35.955 EAL: No shared files mode enabled, IPC is disabled 00:05:35.955 EAL: Heap on socket 0 was shrunk by 130MB 00:05:36.213 EAL: Trying to obtain current memory policy. 00:05:36.213 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.213 EAL: Restoring previous memory policy: 4 00:05:36.213 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.213 EAL: request: mp_malloc_sync 00:05:36.213 EAL: No shared files mode enabled, IPC is disabled 00:05:36.213 EAL: Heap on socket 0 was expanded by 258MB 00:05:36.780 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.780 EAL: request: mp_malloc_sync 00:05:36.780 EAL: No shared files mode enabled, IPC is disabled 00:05:36.780 EAL: Heap on socket 0 was shrunk by 258MB 00:05:37.038 EAL: Trying to obtain current memory policy. 
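Each "Heap on socket 0 was expanded by ..." / "was shrunk by ..." pair above is the malloc test growing and releasing hugepage-backed memory through the registered spdk:(nil) mem event callback. Watching the hugepage pool from a second shell while the test runs makes that pattern visible; this is only an observation aid, not part of the harness:

#!/usr/bin/env bash
# Sketch only: sample the free 2 MB hugepage count twice a second while env_vtophys runs elsewhere.
while sleep 0.5; do
    printf '%s %s\n' "$(date +%T)" "$(awk '/HugePages_Free/ {print $2}' /proc/meminfo)"
done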
00:05:37.038 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.296 EAL: Restoring previous memory policy: 4 00:05:37.296 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.296 EAL: request: mp_malloc_sync 00:05:37.296 EAL: No shared files mode enabled, IPC is disabled 00:05:37.296 EAL: Heap on socket 0 was expanded by 514MB 00:05:38.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.227 EAL: request: mp_malloc_sync 00:05:38.227 EAL: No shared files mode enabled, IPC is disabled 00:05:38.227 EAL: Heap on socket 0 was shrunk by 514MB 00:05:39.158 EAL: Trying to obtain current memory policy. 00:05:39.158 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.415 EAL: Restoring previous memory policy: 4 00:05:39.415 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.415 EAL: request: mp_malloc_sync 00:05:39.415 EAL: No shared files mode enabled, IPC is disabled 00:05:39.415 EAL: Heap on socket 0 was expanded by 1026MB 00:05:41.314 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.572 EAL: request: mp_malloc_sync 00:05:41.572 EAL: No shared files mode enabled, IPC is disabled 00:05:41.572 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:42.946 passed 00:05:42.946 00:05:42.946 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.946 suites 1 1 n/a 0 0 00:05:42.946 tests 2 2 2 0 0 00:05:42.946 asserts 5306 5306 5306 0 n/a 00:05:42.946 00:05:42.946 Elapsed time = 8.156 seconds 00:05:42.946 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.946 EAL: request: mp_malloc_sync 00:05:42.946 EAL: No shared files mode enabled, IPC is disabled 00:05:42.946 EAL: Heap on socket 0 was shrunk by 2MB 00:05:42.946 EAL: No shared files mode enabled, IPC is disabled 00:05:42.946 EAL: No shared files mode enabled, IPC is disabled 00:05:42.946 EAL: No shared files mode enabled, IPC is disabled 00:05:42.946 00:05:42.946 real 0m8.500s 00:05:42.946 user 0m7.270s 00:05:42.946 sys 0m1.059s 00:05:42.946 18:12:54 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.946 18:12:54 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:42.946 ************************************ 00:05:42.946 END TEST env_vtophys 00:05:42.946 ************************************ 00:05:43.206 18:12:54 env -- common/autotest_common.sh@1142 -- # return 0 00:05:43.206 18:12:54 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:43.206 18:12:54 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.206 18:12:54 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.206 18:12:54 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.206 ************************************ 00:05:43.206 START TEST env_pci 00:05:43.206 ************************************ 00:05:43.206 18:12:54 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:43.206 00:05:43.206 00:05:43.206 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.206 http://cunit.sourceforge.net/ 00:05:43.206 00:05:43.206 00:05:43.206 Suite: pci 00:05:43.206 Test: pci_hook ...[2024-07-22 18:12:55.009353] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 61324 has claimed it 00:05:43.206 passed 00:05:43.206 00:05:43.206 EAL: Cannot find device (10000:00:01.0) 00:05:43.206 EAL: Failed to attach device on primary process 00:05:43.206 Run Summary: Type Total Ran Passed Failed 
Inactive 00:05:43.206 suites 1 1 n/a 0 0 00:05:43.206 tests 1 1 1 0 0 00:05:43.206 asserts 25 25 25 0 n/a 00:05:43.206 00:05:43.206 Elapsed time = 0.007 seconds 00:05:43.206 00:05:43.206 real 0m0.072s 00:05:43.206 user 0m0.038s 00:05:43.206 sys 0m0.034s 00:05:43.206 18:12:55 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.206 18:12:55 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:43.206 ************************************ 00:05:43.206 END TEST env_pci 00:05:43.206 ************************************ 00:05:43.206 18:12:55 env -- common/autotest_common.sh@1142 -- # return 0 00:05:43.206 18:12:55 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:43.206 18:12:55 env -- env/env.sh@15 -- # uname 00:05:43.206 18:12:55 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:43.206 18:12:55 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:43.206 18:12:55 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:43.206 18:12:55 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:43.206 18:12:55 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.206 18:12:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.206 ************************************ 00:05:43.206 START TEST env_dpdk_post_init 00:05:43.206 ************************************ 00:05:43.206 18:12:55 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:43.206 EAL: Detected CPU lcores: 10 00:05:43.206 EAL: Detected NUMA nodes: 1 00:05:43.206 EAL: Detected shared linkage of DPDK 00:05:43.206 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:43.206 EAL: Selected IOVA mode 'PA' 00:05:43.465 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:43.465 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:43.465 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:43.465 Starting DPDK initialization... 00:05:43.465 Starting SPDK post initialization... 00:05:43.465 SPDK NVMe probe 00:05:43.465 Attaching to 0000:00:10.0 00:05:43.465 Attaching to 0000:00:11.0 00:05:43.465 Attached to 0000:00:10.0 00:05:43.465 Attached to 0000:00:11.0 00:05:43.465 Cleaning up... 
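The two controllers attached here, 0000:00:10.0 and 0000:00:11.0 with vendor:device 1b36:0010, are QEMU's emulated NVMe devices. The same IDs can be confirmed from sysfs on the test VM if needed; a minimal sketch, assuming a standard Linux /sys/bus/pci layout:

    import pathlib

    # 1b36:0010 is the vendor:device pair reported by the spdk_nvme probe lines above
    # (Red Hat/QEMU vendor ID, QEMU NVMe controller device ID).
    QEMU_NVME = ("0x1b36", "0x0010")

    for dev in sorted(pathlib.Path("/sys/bus/pci/devices").iterdir()):
        vendor = (dev / "vendor").read_text().strip()
        device = (dev / "device").read_text().strip()
        if (vendor, device) == QEMU_NVME:
            print(dev.name)  # e.g. 0000:00:10.0 and 0000:00:11.0 on this run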
00:05:43.465 00:05:43.465 real 0m0.281s 00:05:43.465 user 0m0.077s 00:05:43.465 sys 0m0.105s 00:05:43.465 18:12:55 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.465 ************************************ 00:05:43.465 END TEST env_dpdk_post_init 00:05:43.465 ************************************ 00:05:43.465 18:12:55 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:43.465 18:12:55 env -- common/autotest_common.sh@1142 -- # return 0 00:05:43.465 18:12:55 env -- env/env.sh@26 -- # uname 00:05:43.465 18:12:55 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:43.465 18:12:55 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:43.465 18:12:55 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.465 18:12:55 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.465 18:12:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.465 ************************************ 00:05:43.465 START TEST env_mem_callbacks 00:05:43.465 ************************************ 00:05:43.465 18:12:55 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:43.723 EAL: Detected CPU lcores: 10 00:05:43.723 EAL: Detected NUMA nodes: 1 00:05:43.723 EAL: Detected shared linkage of DPDK 00:05:43.723 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:43.723 EAL: Selected IOVA mode 'PA' 00:05:43.723 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:43.723 00:05:43.723 00:05:43.723 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.723 http://cunit.sourceforge.net/ 00:05:43.724 00:05:43.724 00:05:43.724 Suite: memory 00:05:43.724 Test: test ... 
00:05:43.724 register 0x200000200000 2097152 00:05:43.724 malloc 3145728 00:05:43.724 register 0x200000400000 4194304 00:05:43.724 buf 0x2000004fffc0 len 3145728 PASSED 00:05:43.724 malloc 64 00:05:43.724 buf 0x2000004ffec0 len 64 PASSED 00:05:43.724 malloc 4194304 00:05:43.724 register 0x200000800000 6291456 00:05:43.724 buf 0x2000009fffc0 len 4194304 PASSED 00:05:43.724 free 0x2000004fffc0 3145728 00:05:43.724 free 0x2000004ffec0 64 00:05:43.724 unregister 0x200000400000 4194304 PASSED 00:05:43.724 free 0x2000009fffc0 4194304 00:05:43.724 unregister 0x200000800000 6291456 PASSED 00:05:43.724 malloc 8388608 00:05:43.724 register 0x200000400000 10485760 00:05:43.724 buf 0x2000005fffc0 len 8388608 PASSED 00:05:43.724 free 0x2000005fffc0 8388608 00:05:43.724 unregister 0x200000400000 10485760 PASSED 00:05:43.724 passed 00:05:43.724 00:05:43.724 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.724 suites 1 1 n/a 0 0 00:05:43.724 tests 1 1 1 0 0 00:05:43.724 asserts 15 15 15 0 n/a 00:05:43.724 00:05:43.724 Elapsed time = 0.059 seconds 00:05:43.724 00:05:43.724 real 0m0.267s 00:05:43.724 user 0m0.093s 00:05:43.724 sys 0m0.072s 00:05:43.724 18:12:55 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.724 18:12:55 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:43.724 ************************************ 00:05:43.724 END TEST env_mem_callbacks 00:05:43.724 ************************************ 00:05:43.982 18:12:55 env -- common/autotest_common.sh@1142 -- # return 0 00:05:43.982 00:05:43.982 real 0m9.908s 00:05:43.982 user 0m7.992s 00:05:43.982 sys 0m1.516s 00:05:43.982 18:12:55 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.982 18:12:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.982 ************************************ 00:05:43.982 END TEST env 00:05:43.982 ************************************ 00:05:43.982 18:12:55 -- common/autotest_common.sh@1142 -- # return 0 00:05:43.982 18:12:55 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:43.982 18:12:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.982 18:12:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.982 18:12:55 -- common/autotest_common.sh@10 -- # set +x 00:05:43.982 ************************************ 00:05:43.982 START TEST rpc 00:05:43.982 ************************************ 00:05:43.982 18:12:55 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:43.982 * Looking for test storage... 00:05:43.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:43.982 18:12:55 rpc -- rpc/rpc.sh@65 -- # spdk_pid=61443 00:05:43.982 18:12:55 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:43.982 18:12:55 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.982 18:12:55 rpc -- rpc/rpc.sh@67 -- # waitforlisten 61443 00:05:43.982 18:12:55 rpc -- common/autotest_common.sh@829 -- # '[' -z 61443 ']' 00:05:43.982 18:12:55 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.982 18:12:55 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.982 18:12:55 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
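The rpc suites that follow exercise spdk_tgt through rpc_cmd, which is effectively a JSON-RPC 2.0 client on the Unix domain socket named above (/var/tmp/spdk.sock). A minimal Python sketch of that call pattern, assuming the default socket path and the num_blocks/block_size parameters visible in the bdev dumps below; a production client would want sturdier response framing:

    import json
    import socket

    SOCK_PATH = "/var/tmp/spdk.sock"  # default spdk_tgt RPC socket, as echoed above

    def rpc(method, params=None, request_id=1):
        """Send one JSON-RPC 2.0 request to spdk_tgt and return the parsed reply."""
        request = {"jsonrpc": "2.0", "method": method, "id": request_id}
        if params is not None:
            request["params"] = params
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            sock.connect(SOCK_PATH)
            sock.sendall(json.dumps(request).encode())
            buf = b""
            while True:
                chunk = sock.recv(4096)
                if not chunk:
                    raise ConnectionError("RPC socket closed before a full reply arrived")
                buf += chunk
                try:  # keep reading until the buffer parses as one complete JSON object
                    return json.loads(buf)
                except json.JSONDecodeError:
                    continue

    # Roughly what rpc_integrity does: create a malloc bdev, count bdevs, delete it again.
    name = rpc("bdev_malloc_create", {"num_blocks": 16384, "block_size": 512})["result"]
    bdevs = rpc("bdev_get_bdevs", request_id=2)["result"]
    print(len(bdevs))  # the test script checks the same thing with `jq length`
    rpc("bdev_malloc_delete", {"name": name}, request_id=3)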
00:05:43.982 18:12:55 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.982 18:12:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.240 [2024-07-22 18:12:56.018718] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:05:44.240 [2024-07-22 18:12:56.019466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61443 ] 00:05:44.240 [2024-07-22 18:12:56.188894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.497 [2024-07-22 18:12:56.428357] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:44.497 [2024-07-22 18:12:56.428422] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 61443' to capture a snapshot of events at runtime. 00:05:44.497 [2024-07-22 18:12:56.428458] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:44.497 [2024-07-22 18:12:56.428471] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:44.497 [2024-07-22 18:12:56.428484] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid61443 for offline analysis/debug. 00:05:44.497 [2024-07-22 18:12:56.428535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.431 18:12:57 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.431 18:12:57 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:45.431 18:12:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:45.431 18:12:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:45.431 18:12:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:45.431 18:12:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:45.431 18:12:57 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.431 18:12:57 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.431 18:12:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.431 ************************************ 00:05:45.431 START TEST rpc_integrity 00:05:45.431 ************************************ 00:05:45.431 18:12:57 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:45.431 18:12:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:45.431 18:12:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.431 18:12:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.431 18:12:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.431 18:12:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:45.431 18:12:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:45.431 18:12:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:45.431 18:12:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:45.431 18:12:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.431 18:12:57 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.431 18:12:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.431 18:12:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:45.431 18:12:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:45.431 18:12:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.431 18:12:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.431 18:12:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.431 18:12:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:45.431 { 00:05:45.431 "aliases": [ 00:05:45.431 "72a588e4-9aee-4944-9ebe-7ff9f9be8fd2" 00:05:45.431 ], 00:05:45.431 "assigned_rate_limits": { 00:05:45.431 "r_mbytes_per_sec": 0, 00:05:45.431 "rw_ios_per_sec": 0, 00:05:45.431 "rw_mbytes_per_sec": 0, 00:05:45.431 "w_mbytes_per_sec": 0 00:05:45.431 }, 00:05:45.431 "block_size": 512, 00:05:45.431 "claimed": false, 00:05:45.431 "driver_specific": {}, 00:05:45.431 "memory_domains": [ 00:05:45.431 { 00:05:45.431 "dma_device_id": "system", 00:05:45.431 "dma_device_type": 1 00:05:45.431 }, 00:05:45.431 { 00:05:45.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.431 "dma_device_type": 2 00:05:45.431 } 00:05:45.431 ], 00:05:45.431 "name": "Malloc0", 00:05:45.431 "num_blocks": 16384, 00:05:45.431 "product_name": "Malloc disk", 00:05:45.431 "supported_io_types": { 00:05:45.431 "abort": true, 00:05:45.431 "compare": false, 00:05:45.431 "compare_and_write": false, 00:05:45.431 "copy": true, 00:05:45.431 "flush": true, 00:05:45.431 "get_zone_info": false, 00:05:45.431 "nvme_admin": false, 00:05:45.431 "nvme_io": false, 00:05:45.431 "nvme_io_md": false, 00:05:45.431 "nvme_iov_md": false, 00:05:45.431 "read": true, 00:05:45.431 "reset": true, 00:05:45.431 "seek_data": false, 00:05:45.431 "seek_hole": false, 00:05:45.431 "unmap": true, 00:05:45.431 "write": true, 00:05:45.431 "write_zeroes": true, 00:05:45.431 "zcopy": true, 00:05:45.431 "zone_append": false, 00:05:45.431 "zone_management": false 00:05:45.431 }, 00:05:45.431 "uuid": "72a588e4-9aee-4944-9ebe-7ff9f9be8fd2", 00:05:45.431 "zoned": false 00:05:45.431 } 00:05:45.431 ]' 00:05:45.431 18:12:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:45.431 18:12:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:45.431 18:12:57 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:45.431 18:12:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.431 18:12:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.431 [2024-07-22 18:12:57.396253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:45.431 [2024-07-22 18:12:57.396332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:45.431 [2024-07-22 18:12:57.396375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:05:45.431 [2024-07-22 18:12:57.396392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:45.431 [2024-07-22 18:12:57.399439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:45.431 [2024-07-22 18:12:57.399504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:45.431 Passthru0 00:05:45.431 18:12:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.431 
18:12:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:45.431 18:12:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.431 18:12:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.431 18:12:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.431 18:12:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:45.431 { 00:05:45.431 "aliases": [ 00:05:45.431 "72a588e4-9aee-4944-9ebe-7ff9f9be8fd2" 00:05:45.431 ], 00:05:45.431 "assigned_rate_limits": { 00:05:45.431 "r_mbytes_per_sec": 0, 00:05:45.431 "rw_ios_per_sec": 0, 00:05:45.431 "rw_mbytes_per_sec": 0, 00:05:45.431 "w_mbytes_per_sec": 0 00:05:45.431 }, 00:05:45.431 "block_size": 512, 00:05:45.431 "claim_type": "exclusive_write", 00:05:45.431 "claimed": true, 00:05:45.431 "driver_specific": {}, 00:05:45.431 "memory_domains": [ 00:05:45.431 { 00:05:45.431 "dma_device_id": "system", 00:05:45.431 "dma_device_type": 1 00:05:45.431 }, 00:05:45.431 { 00:05:45.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.431 "dma_device_type": 2 00:05:45.431 } 00:05:45.431 ], 00:05:45.431 "name": "Malloc0", 00:05:45.431 "num_blocks": 16384, 00:05:45.431 "product_name": "Malloc disk", 00:05:45.431 "supported_io_types": { 00:05:45.431 "abort": true, 00:05:45.431 "compare": false, 00:05:45.431 "compare_and_write": false, 00:05:45.431 "copy": true, 00:05:45.431 "flush": true, 00:05:45.431 "get_zone_info": false, 00:05:45.431 "nvme_admin": false, 00:05:45.431 "nvme_io": false, 00:05:45.431 "nvme_io_md": false, 00:05:45.431 "nvme_iov_md": false, 00:05:45.431 "read": true, 00:05:45.431 "reset": true, 00:05:45.431 "seek_data": false, 00:05:45.431 "seek_hole": false, 00:05:45.431 "unmap": true, 00:05:45.431 "write": true, 00:05:45.431 "write_zeroes": true, 00:05:45.432 "zcopy": true, 00:05:45.432 "zone_append": false, 00:05:45.432 "zone_management": false 00:05:45.432 }, 00:05:45.432 "uuid": "72a588e4-9aee-4944-9ebe-7ff9f9be8fd2", 00:05:45.432 "zoned": false 00:05:45.432 }, 00:05:45.432 { 00:05:45.432 "aliases": [ 00:05:45.432 "bb7376dd-071f-5c72-aaab-a0999d560e0c" 00:05:45.432 ], 00:05:45.432 "assigned_rate_limits": { 00:05:45.432 "r_mbytes_per_sec": 0, 00:05:45.432 "rw_ios_per_sec": 0, 00:05:45.432 "rw_mbytes_per_sec": 0, 00:05:45.432 "w_mbytes_per_sec": 0 00:05:45.432 }, 00:05:45.432 "block_size": 512, 00:05:45.432 "claimed": false, 00:05:45.432 "driver_specific": { 00:05:45.432 "passthru": { 00:05:45.432 "base_bdev_name": "Malloc0", 00:05:45.432 "name": "Passthru0" 00:05:45.432 } 00:05:45.432 }, 00:05:45.432 "memory_domains": [ 00:05:45.432 { 00:05:45.432 "dma_device_id": "system", 00:05:45.432 "dma_device_type": 1 00:05:45.432 }, 00:05:45.432 { 00:05:45.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.432 "dma_device_type": 2 00:05:45.432 } 00:05:45.432 ], 00:05:45.432 "name": "Passthru0", 00:05:45.432 "num_blocks": 16384, 00:05:45.432 "product_name": "passthru", 00:05:45.432 "supported_io_types": { 00:05:45.432 "abort": true, 00:05:45.432 "compare": false, 00:05:45.432 "compare_and_write": false, 00:05:45.432 "copy": true, 00:05:45.432 "flush": true, 00:05:45.432 "get_zone_info": false, 00:05:45.432 "nvme_admin": false, 00:05:45.432 "nvme_io": false, 00:05:45.432 "nvme_io_md": false, 00:05:45.432 "nvme_iov_md": false, 00:05:45.432 "read": true, 00:05:45.432 "reset": true, 00:05:45.432 "seek_data": false, 00:05:45.432 "seek_hole": false, 00:05:45.432 "unmap": true, 00:05:45.432 "write": true, 00:05:45.432 "write_zeroes": true, 00:05:45.432 
"zcopy": true, 00:05:45.432 "zone_append": false, 00:05:45.432 "zone_management": false 00:05:45.432 }, 00:05:45.432 "uuid": "bb7376dd-071f-5c72-aaab-a0999d560e0c", 00:05:45.432 "zoned": false 00:05:45.432 } 00:05:45.432 ]' 00:05:45.432 18:12:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:45.690 18:12:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:45.690 18:12:57 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:45.690 18:12:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.690 18:12:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.690 18:12:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.690 18:12:57 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:45.690 18:12:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.690 18:12:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.690 18:12:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.690 18:12:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:45.690 18:12:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.690 18:12:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.690 18:12:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.690 18:12:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:45.690 18:12:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:45.690 18:12:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:45.690 00:05:45.690 real 0m0.343s 00:05:45.690 user 0m0.201s 00:05:45.690 sys 0m0.042s 00:05:45.690 18:12:57 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.690 18:12:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.690 ************************************ 00:05:45.690 END TEST rpc_integrity 00:05:45.690 ************************************ 00:05:45.690 18:12:57 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:45.690 18:12:57 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:45.690 18:12:57 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.690 18:12:57 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.690 18:12:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.690 ************************************ 00:05:45.690 START TEST rpc_plugins 00:05:45.690 ************************************ 00:05:45.690 18:12:57 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:45.690 18:12:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:45.690 18:12:57 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.690 18:12:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.690 18:12:57 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.690 18:12:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:45.690 18:12:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:45.690 18:12:57 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.690 18:12:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.690 18:12:57 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.691 18:12:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 
00:05:45.691 { 00:05:45.691 "aliases": [ 00:05:45.691 "d9615d22-b0dc-4aa1-8529-18cd1e305bc2" 00:05:45.691 ], 00:05:45.691 "assigned_rate_limits": { 00:05:45.691 "r_mbytes_per_sec": 0, 00:05:45.691 "rw_ios_per_sec": 0, 00:05:45.691 "rw_mbytes_per_sec": 0, 00:05:45.691 "w_mbytes_per_sec": 0 00:05:45.691 }, 00:05:45.691 "block_size": 4096, 00:05:45.691 "claimed": false, 00:05:45.691 "driver_specific": {}, 00:05:45.691 "memory_domains": [ 00:05:45.691 { 00:05:45.691 "dma_device_id": "system", 00:05:45.691 "dma_device_type": 1 00:05:45.691 }, 00:05:45.691 { 00:05:45.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.691 "dma_device_type": 2 00:05:45.691 } 00:05:45.691 ], 00:05:45.691 "name": "Malloc1", 00:05:45.691 "num_blocks": 256, 00:05:45.691 "product_name": "Malloc disk", 00:05:45.691 "supported_io_types": { 00:05:45.691 "abort": true, 00:05:45.691 "compare": false, 00:05:45.691 "compare_and_write": false, 00:05:45.691 "copy": true, 00:05:45.691 "flush": true, 00:05:45.691 "get_zone_info": false, 00:05:45.691 "nvme_admin": false, 00:05:45.691 "nvme_io": false, 00:05:45.691 "nvme_io_md": false, 00:05:45.691 "nvme_iov_md": false, 00:05:45.691 "read": true, 00:05:45.691 "reset": true, 00:05:45.691 "seek_data": false, 00:05:45.691 "seek_hole": false, 00:05:45.691 "unmap": true, 00:05:45.691 "write": true, 00:05:45.691 "write_zeroes": true, 00:05:45.691 "zcopy": true, 00:05:45.691 "zone_append": false, 00:05:45.691 "zone_management": false 00:05:45.691 }, 00:05:45.691 "uuid": "d9615d22-b0dc-4aa1-8529-18cd1e305bc2", 00:05:45.691 "zoned": false 00:05:45.691 } 00:05:45.691 ]' 00:05:45.691 18:12:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:45.950 18:12:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:45.950 18:12:57 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:45.950 18:12:57 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.950 18:12:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.950 18:12:57 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.950 18:12:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:45.950 18:12:57 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.950 18:12:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.950 18:12:57 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.950 18:12:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:45.950 18:12:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:45.950 18:12:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:45.950 00:05:45.950 real 0m0.159s 00:05:45.950 user 0m0.094s 00:05:45.950 sys 0m0.026s 00:05:45.950 18:12:57 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.950 ************************************ 00:05:45.950 18:12:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.950 END TEST rpc_plugins 00:05:45.950 ************************************ 00:05:45.950 18:12:57 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:45.950 18:12:57 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:45.950 18:12:57 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.950 18:12:57 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.950 18:12:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.950 ************************************ 00:05:45.950 START TEST 
rpc_trace_cmd_test 00:05:45.950 ************************************ 00:05:45.950 18:12:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:45.950 18:12:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:45.950 18:12:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:45.950 18:12:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.950 18:12:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:45.950 18:12:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.950 18:12:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:45.950 "bdev": { 00:05:45.950 "mask": "0x8", 00:05:45.950 "tpoint_mask": "0xffffffffffffffff" 00:05:45.950 }, 00:05:45.950 "bdev_nvme": { 00:05:45.950 "mask": "0x4000", 00:05:45.950 "tpoint_mask": "0x0" 00:05:45.950 }, 00:05:45.950 "blobfs": { 00:05:45.950 "mask": "0x80", 00:05:45.951 "tpoint_mask": "0x0" 00:05:45.951 }, 00:05:45.951 "dsa": { 00:05:45.951 "mask": "0x200", 00:05:45.951 "tpoint_mask": "0x0" 00:05:45.951 }, 00:05:45.951 "ftl": { 00:05:45.951 "mask": "0x40", 00:05:45.951 "tpoint_mask": "0x0" 00:05:45.951 }, 00:05:45.951 "iaa": { 00:05:45.951 "mask": "0x1000", 00:05:45.951 "tpoint_mask": "0x0" 00:05:45.951 }, 00:05:45.951 "iscsi_conn": { 00:05:45.951 "mask": "0x2", 00:05:45.951 "tpoint_mask": "0x0" 00:05:45.951 }, 00:05:45.951 "nvme_pcie": { 00:05:45.951 "mask": "0x800", 00:05:45.951 "tpoint_mask": "0x0" 00:05:45.951 }, 00:05:45.951 "nvme_tcp": { 00:05:45.951 "mask": "0x2000", 00:05:45.951 "tpoint_mask": "0x0" 00:05:45.951 }, 00:05:45.951 "nvmf_rdma": { 00:05:45.951 "mask": "0x10", 00:05:45.951 "tpoint_mask": "0x0" 00:05:45.951 }, 00:05:45.951 "nvmf_tcp": { 00:05:45.951 "mask": "0x20", 00:05:45.951 "tpoint_mask": "0x0" 00:05:45.951 }, 00:05:45.951 "scsi": { 00:05:45.951 "mask": "0x4", 00:05:45.951 "tpoint_mask": "0x0" 00:05:45.951 }, 00:05:45.951 "sock": { 00:05:45.951 "mask": "0x8000", 00:05:45.951 "tpoint_mask": "0x0" 00:05:45.951 }, 00:05:45.951 "thread": { 00:05:45.951 "mask": "0x400", 00:05:45.951 "tpoint_mask": "0x0" 00:05:45.951 }, 00:05:45.951 "tpoint_group_mask": "0x8", 00:05:45.951 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid61443" 00:05:45.951 }' 00:05:45.951 18:12:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:45.951 18:12:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:45.951 18:12:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:45.951 18:12:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:46.209 18:12:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:46.209 18:12:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:46.209 18:12:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:46.209 18:12:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:46.209 18:12:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:46.209 18:12:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:46.209 00:05:46.209 real 0m0.267s 00:05:46.209 user 0m0.232s 00:05:46.209 sys 0m0.023s 00:05:46.209 18:12:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.209 ************************************ 00:05:46.209 END TEST rpc_trace_cmd_test 00:05:46.209 18:12:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 
-- # set +x 00:05:46.209 ************************************ 00:05:46.209 18:12:58 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:46.209 18:12:58 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:46.209 18:12:58 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:46.209 18:12:58 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.209 18:12:58 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.209 18:12:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.209 ************************************ 00:05:46.209 START TEST go_rpc 00:05:46.209 ************************************ 00:05:46.209 18:12:58 rpc.go_rpc -- common/autotest_common.sh@1123 -- # go_rpc 00:05:46.209 18:12:58 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:46.209 18:12:58 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:46.209 18:12:58 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:05:46.467 18:12:58 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:46.467 18:12:58 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:46.467 18:12:58 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.467 18:12:58 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.467 18:12:58 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.467 18:12:58 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:46.467 18:12:58 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:46.467 18:12:58 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["a094f50d-1a0d-4502-a624-303d5c32227e"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"a094f50d-1a0d-4502-a624-303d5c32227e","zoned":false}]' 00:05:46.467 18:12:58 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:05:46.467 18:12:58 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:46.467 18:12:58 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:46.467 18:12:58 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.467 18:12:58 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.467 18:12:58 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.467 18:12:58 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:46.467 18:12:58 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:46.467 18:12:58 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:05:46.467 18:12:58 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:46.467 00:05:46.467 real 0m0.249s 00:05:46.467 user 0m0.153s 00:05:46.467 sys 0m0.031s 00:05:46.467 18:12:58 rpc.go_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.467 18:12:58 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.467 ************************************ 00:05:46.467 END TEST go_rpc 
00:05:46.467 ************************************ 00:05:46.467 18:12:58 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:46.467 18:12:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:46.467 18:12:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:46.468 18:12:58 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.468 18:12:58 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.468 18:12:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.468 ************************************ 00:05:46.468 START TEST rpc_daemon_integrity 00:05:46.468 ************************************ 00:05:46.468 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:46.468 18:12:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:46.468 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.468 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.468 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.468 18:12:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:46.468 18:12:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:46.726 18:12:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:46.726 18:12:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:46.726 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.726 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.726 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.726 18:12:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:46.726 18:12:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:46.726 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.726 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.726 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.726 18:12:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:46.726 { 00:05:46.726 "aliases": [ 00:05:46.726 "fa62428a-b493-408b-9dd9-0729c156a236" 00:05:46.726 ], 00:05:46.726 "assigned_rate_limits": { 00:05:46.726 "r_mbytes_per_sec": 0, 00:05:46.726 "rw_ios_per_sec": 0, 00:05:46.726 "rw_mbytes_per_sec": 0, 00:05:46.726 "w_mbytes_per_sec": 0 00:05:46.726 }, 00:05:46.726 "block_size": 512, 00:05:46.726 "claimed": false, 00:05:46.726 "driver_specific": {}, 00:05:46.726 "memory_domains": [ 00:05:46.726 { 00:05:46.726 "dma_device_id": "system", 00:05:46.726 "dma_device_type": 1 00:05:46.726 }, 00:05:46.726 { 00:05:46.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.726 "dma_device_type": 2 00:05:46.726 } 00:05:46.726 ], 00:05:46.726 "name": "Malloc3", 00:05:46.726 "num_blocks": 16384, 00:05:46.726 "product_name": "Malloc disk", 00:05:46.726 "supported_io_types": { 00:05:46.726 "abort": true, 00:05:46.726 "compare": false, 00:05:46.726 "compare_and_write": false, 00:05:46.726 "copy": true, 00:05:46.726 "flush": true, 00:05:46.726 "get_zone_info": false, 00:05:46.726 "nvme_admin": false, 00:05:46.726 "nvme_io": false, 00:05:46.726 "nvme_io_md": false, 00:05:46.726 "nvme_iov_md": false, 00:05:46.726 "read": true, 00:05:46.726 "reset": true, 00:05:46.726 "seek_data": false, 
00:05:46.726 "seek_hole": false, 00:05:46.726 "unmap": true, 00:05:46.726 "write": true, 00:05:46.726 "write_zeroes": true, 00:05:46.726 "zcopy": true, 00:05:46.726 "zone_append": false, 00:05:46.726 "zone_management": false 00:05:46.726 }, 00:05:46.726 "uuid": "fa62428a-b493-408b-9dd9-0729c156a236", 00:05:46.726 "zoned": false 00:05:46.726 } 00:05:46.726 ]' 00:05:46.726 18:12:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:46.726 18:12:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:46.726 18:12:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:46.726 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.726 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.726 [2024-07-22 18:12:58.639187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:46.726 [2024-07-22 18:12:58.639262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:46.726 [2024-07-22 18:12:58.639302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:05:46.726 [2024-07-22 18:12:58.639316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:46.726 [2024-07-22 18:12:58.642350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:46.726 [2024-07-22 18:12:58.642394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:46.726 Passthru0 00:05:46.726 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.726 18:12:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:46.726 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.726 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.726 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.726 18:12:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:46.726 { 00:05:46.726 "aliases": [ 00:05:46.726 "fa62428a-b493-408b-9dd9-0729c156a236" 00:05:46.726 ], 00:05:46.726 "assigned_rate_limits": { 00:05:46.726 "r_mbytes_per_sec": 0, 00:05:46.726 "rw_ios_per_sec": 0, 00:05:46.726 "rw_mbytes_per_sec": 0, 00:05:46.726 "w_mbytes_per_sec": 0 00:05:46.726 }, 00:05:46.726 "block_size": 512, 00:05:46.726 "claim_type": "exclusive_write", 00:05:46.726 "claimed": true, 00:05:46.726 "driver_specific": {}, 00:05:46.726 "memory_domains": [ 00:05:46.726 { 00:05:46.726 "dma_device_id": "system", 00:05:46.726 "dma_device_type": 1 00:05:46.726 }, 00:05:46.726 { 00:05:46.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.726 "dma_device_type": 2 00:05:46.726 } 00:05:46.726 ], 00:05:46.726 "name": "Malloc3", 00:05:46.726 "num_blocks": 16384, 00:05:46.726 "product_name": "Malloc disk", 00:05:46.726 "supported_io_types": { 00:05:46.726 "abort": true, 00:05:46.726 "compare": false, 00:05:46.726 "compare_and_write": false, 00:05:46.727 "copy": true, 00:05:46.727 "flush": true, 00:05:46.727 "get_zone_info": false, 00:05:46.727 "nvme_admin": false, 00:05:46.727 "nvme_io": false, 00:05:46.727 "nvme_io_md": false, 00:05:46.727 "nvme_iov_md": false, 00:05:46.727 "read": true, 00:05:46.727 "reset": true, 00:05:46.727 "seek_data": false, 00:05:46.727 "seek_hole": false, 00:05:46.727 "unmap": true, 00:05:46.727 "write": true, 00:05:46.727 "write_zeroes": true, 
00:05:46.727 "zcopy": true, 00:05:46.727 "zone_append": false, 00:05:46.727 "zone_management": false 00:05:46.727 }, 00:05:46.727 "uuid": "fa62428a-b493-408b-9dd9-0729c156a236", 00:05:46.727 "zoned": false 00:05:46.727 }, 00:05:46.727 { 00:05:46.727 "aliases": [ 00:05:46.727 "7c472757-ee17-5506-bffa-a948dc2e2fd7" 00:05:46.727 ], 00:05:46.727 "assigned_rate_limits": { 00:05:46.727 "r_mbytes_per_sec": 0, 00:05:46.727 "rw_ios_per_sec": 0, 00:05:46.727 "rw_mbytes_per_sec": 0, 00:05:46.727 "w_mbytes_per_sec": 0 00:05:46.727 }, 00:05:46.727 "block_size": 512, 00:05:46.727 "claimed": false, 00:05:46.727 "driver_specific": { 00:05:46.727 "passthru": { 00:05:46.727 "base_bdev_name": "Malloc3", 00:05:46.727 "name": "Passthru0" 00:05:46.727 } 00:05:46.727 }, 00:05:46.727 "memory_domains": [ 00:05:46.727 { 00:05:46.727 "dma_device_id": "system", 00:05:46.727 "dma_device_type": 1 00:05:46.727 }, 00:05:46.727 { 00:05:46.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.727 "dma_device_type": 2 00:05:46.727 } 00:05:46.727 ], 00:05:46.727 "name": "Passthru0", 00:05:46.727 "num_blocks": 16384, 00:05:46.727 "product_name": "passthru", 00:05:46.727 "supported_io_types": { 00:05:46.727 "abort": true, 00:05:46.727 "compare": false, 00:05:46.727 "compare_and_write": false, 00:05:46.727 "copy": true, 00:05:46.727 "flush": true, 00:05:46.727 "get_zone_info": false, 00:05:46.727 "nvme_admin": false, 00:05:46.727 "nvme_io": false, 00:05:46.727 "nvme_io_md": false, 00:05:46.727 "nvme_iov_md": false, 00:05:46.727 "read": true, 00:05:46.727 "reset": true, 00:05:46.727 "seek_data": false, 00:05:46.727 "seek_hole": false, 00:05:46.727 "unmap": true, 00:05:46.727 "write": true, 00:05:46.727 "write_zeroes": true, 00:05:46.727 "zcopy": true, 00:05:46.727 "zone_append": false, 00:05:46.727 "zone_management": false 00:05:46.727 }, 00:05:46.727 "uuid": "7c472757-ee17-5506-bffa-a948dc2e2fd7", 00:05:46.727 "zoned": false 00:05:46.727 } 00:05:46.727 ]' 00:05:46.727 18:12:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:46.727 18:12:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:46.727 18:12:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:46.727 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.727 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.985 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.985 18:12:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:46.985 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.985 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.985 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.985 18:12:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:46.985 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.985 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.985 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.985 18:12:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:46.985 18:12:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:46.985 18:12:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:46.985 00:05:46.985 
real 0m0.371s 00:05:46.985 user 0m0.232s 00:05:46.985 sys 0m0.039s 00:05:46.985 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.985 18:12:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.985 ************************************ 00:05:46.985 END TEST rpc_daemon_integrity 00:05:46.985 ************************************ 00:05:46.985 18:12:58 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:46.985 18:12:58 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:46.985 18:12:58 rpc -- rpc/rpc.sh@84 -- # killprocess 61443 00:05:46.985 18:12:58 rpc -- common/autotest_common.sh@948 -- # '[' -z 61443 ']' 00:05:46.985 18:12:58 rpc -- common/autotest_common.sh@952 -- # kill -0 61443 00:05:46.985 18:12:58 rpc -- common/autotest_common.sh@953 -- # uname 00:05:46.985 18:12:58 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.985 18:12:58 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61443 00:05:46.985 18:12:58 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.985 18:12:58 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.985 killing process with pid 61443 00:05:46.985 18:12:58 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61443' 00:05:46.985 18:12:58 rpc -- common/autotest_common.sh@967 -- # kill 61443 00:05:46.985 18:12:58 rpc -- common/autotest_common.sh@972 -- # wait 61443 00:05:49.517 00:05:49.517 real 0m5.391s 00:05:49.517 user 0m6.248s 00:05:49.517 sys 0m0.931s 00:05:49.517 18:13:01 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.517 ************************************ 00:05:49.517 END TEST rpc 00:05:49.517 ************************************ 00:05:49.517 18:13:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.517 18:13:01 -- common/autotest_common.sh@1142 -- # return 0 00:05:49.517 18:13:01 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:49.517 18:13:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.517 18:13:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.517 18:13:01 -- common/autotest_common.sh@10 -- # set +x 00:05:49.517 ************************************ 00:05:49.517 START TEST skip_rpc 00:05:49.517 ************************************ 00:05:49.517 18:13:01 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:49.517 * Looking for test storage... 
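skip_rpc launches the target with --no-rpc-server, so the NOT rpc_cmd spdk_get_version call below is expected to fail: with no listener, /var/tmp/spdk.sock never exists and the client reports "no such file or directory". A small sketch of the two connection failure modes such a client can hit, assuming the default socket path:

    import socket

    SOCK_PATH = "/var/tmp/spdk.sock"

    def probe_rpc_socket(path=SOCK_PATH):
        """Classify a failed connection attempt, mirroring the error logged below."""
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            try:
                sock.connect(path)
            except FileNotFoundError:
                # The socket file was never created, e.g. spdk_tgt ran with --no-rpc-server.
                return "no such file or directory"
            except ConnectionRefusedError:
                # The socket file exists but nothing is accepting connections on it.
                return "connection refused"
            return "connected"

    print(probe_rpc_socket())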
00:05:49.517 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:49.517 18:13:01 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:49.517 18:13:01 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:49.517 18:13:01 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:49.517 18:13:01 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.517 18:13:01 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.518 18:13:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.518 ************************************ 00:05:49.518 START TEST skip_rpc 00:05:49.518 ************************************ 00:05:49.518 18:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:49.518 18:13:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=61727 00:05:49.518 18:13:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:49.518 18:13:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.518 18:13:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:49.518 [2024-07-22 18:13:01.452869] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:05:49.518 [2024-07-22 18:13:01.453037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61727 ] 00:05:49.775 [2024-07-22 18:13:01.618665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.033 [2024-07-22 18:13:01.870565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.307 18:13:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:55.307 18:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:55.307 18:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:55.307 18:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:55.307 18:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.307 18:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:55.307 18:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.307 18:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:55.307 18:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.307 18:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.307 2024/07/22 18:13:06 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:05:55.307 18:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:55.307 18:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:55.307 18:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:55.307 18:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:55.307 18:13:06 
skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:55.307 18:13:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:55.307 18:13:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 61727 00:05:55.307 18:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 61727 ']' 00:05:55.307 18:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 61727 00:05:55.308 18:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:55.308 18:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.308 18:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61727 00:05:55.308 killing process with pid 61727 00:05:55.308 18:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.308 18:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.308 18:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61727' 00:05:55.308 18:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 61727 00:05:55.308 18:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 61727 00:05:57.207 00:05:57.207 real 0m7.496s 00:05:57.207 user 0m6.845s 00:05:57.207 sys 0m0.536s 00:05:57.207 18:13:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.207 18:13:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.207 ************************************ 00:05:57.207 END TEST skip_rpc 00:05:57.207 ************************************ 00:05:57.207 18:13:08 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:57.207 18:13:08 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:57.207 18:13:08 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.207 18:13:08 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.207 18:13:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.207 ************************************ 00:05:57.207 START TEST skip_rpc_with_json 00:05:57.207 ************************************ 00:05:57.207 18:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:57.207 18:13:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:57.207 18:13:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=61848 00:05:57.207 18:13:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.207 18:13:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.207 18:13:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 61848 00:05:57.207 18:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 61848 ']' 00:05:57.207 18:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.207 18:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.207 18:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
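skip_rpc_with_json first creates a TCP transport over RPC, snapshots the running target with save_config into test/rpc/config.json (the large dump below), and then restarts spdk_tgt with --json pointing at that file. A short sketch of inspecting such a snapshot before replaying it; the path is the CONFIG_PATH used by skip_rpc.sh in this run, and the field names are taken from the dump itself:

    import json

    # CONFIG_PATH as set at the top of skip_rpc.sh in this run.
    CONFIG_PATH = "/home/vagrant/spdk_repo/spdk/test/rpc/config.json"

    with open(CONFIG_PATH) as f:
        config = json.load(f)

    # save_config emits one entry per subsystem; "config" is either null or a list of RPC calls.
    for subsystem in config["subsystems"]:
        print(subsystem["subsystem"], len(subsystem["config"] or []))

    # The --json reload only restores the TCP transport if nvmf_create_transport was captured.
    nvmf = next(s for s in config["subsystems"] if s["subsystem"] == "nvmf")
    assert "nvmf_create_transport" in [entry["method"] for entry in nvmf["config"]]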
00:05:57.207 18:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.207 18:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:57.207 [2024-07-22 18:13:09.025389] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:05:57.207 [2024-07-22 18:13:09.025589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61848 ] 00:05:57.207 [2024-07-22 18:13:09.196777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.465 [2024-07-22 18:13:09.479562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.439 18:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.439 18:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:58.439 18:13:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:58.439 18:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.439 18:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:58.439 [2024-07-22 18:13:10.409924] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:58.439 2024/07/22 18:13:10 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:05:58.439 request: 00:05:58.439 { 00:05:58.439 "method": "nvmf_get_transports", 00:05:58.439 "params": { 00:05:58.439 "trtype": "tcp" 00:05:58.439 } 00:05:58.439 } 00:05:58.439 Got JSON-RPC error response 00:05:58.439 GoRPCClient: error on JSON-RPC call 00:05:58.439 18:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:58.439 18:13:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:58.439 18:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.439 18:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:58.439 [2024-07-22 18:13:10.418011] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:58.439 18:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.439 18:13:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:58.439 18:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.439 18:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:58.697 18:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.697 18:13:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:58.697 { 00:05:58.697 "subsystems": [ 00:05:58.697 { 00:05:58.697 "subsystem": "vfio_user_target", 00:05:58.697 "config": null 00:05:58.697 }, 00:05:58.697 { 00:05:58.697 "subsystem": "keyring", 00:05:58.697 "config": [] 00:05:58.697 }, 00:05:58.697 { 00:05:58.697 "subsystem": "iobuf", 00:05:58.697 "config": [ 00:05:58.697 { 00:05:58.697 "method": "iobuf_set_options", 00:05:58.697 "params": { 00:05:58.697 "large_bufsize": 135168, 00:05:58.697 "large_pool_count": 1024, 
00:05:58.697 "small_bufsize": 8192, 00:05:58.697 "small_pool_count": 8192 00:05:58.697 } 00:05:58.697 } 00:05:58.697 ] 00:05:58.697 }, 00:05:58.697 { 00:05:58.697 "subsystem": "sock", 00:05:58.697 "config": [ 00:05:58.697 { 00:05:58.697 "method": "sock_set_default_impl", 00:05:58.697 "params": { 00:05:58.697 "impl_name": "posix" 00:05:58.697 } 00:05:58.697 }, 00:05:58.697 { 00:05:58.697 "method": "sock_impl_set_options", 00:05:58.697 "params": { 00:05:58.697 "enable_ktls": false, 00:05:58.697 "enable_placement_id": 0, 00:05:58.697 "enable_quickack": false, 00:05:58.697 "enable_recv_pipe": true, 00:05:58.697 "enable_zerocopy_send_client": false, 00:05:58.697 "enable_zerocopy_send_server": true, 00:05:58.697 "impl_name": "ssl", 00:05:58.697 "recv_buf_size": 4096, 00:05:58.697 "send_buf_size": 4096, 00:05:58.697 "tls_version": 0, 00:05:58.697 "zerocopy_threshold": 0 00:05:58.697 } 00:05:58.697 }, 00:05:58.697 { 00:05:58.697 "method": "sock_impl_set_options", 00:05:58.697 "params": { 00:05:58.697 "enable_ktls": false, 00:05:58.697 "enable_placement_id": 0, 00:05:58.697 "enable_quickack": false, 00:05:58.697 "enable_recv_pipe": true, 00:05:58.697 "enable_zerocopy_send_client": false, 00:05:58.697 "enable_zerocopy_send_server": true, 00:05:58.697 "impl_name": "posix", 00:05:58.697 "recv_buf_size": 2097152, 00:05:58.697 "send_buf_size": 2097152, 00:05:58.697 "tls_version": 0, 00:05:58.697 "zerocopy_threshold": 0 00:05:58.697 } 00:05:58.697 } 00:05:58.697 ] 00:05:58.697 }, 00:05:58.697 { 00:05:58.697 "subsystem": "vmd", 00:05:58.697 "config": [] 00:05:58.697 }, 00:05:58.697 { 00:05:58.697 "subsystem": "accel", 00:05:58.697 "config": [ 00:05:58.697 { 00:05:58.697 "method": "accel_set_options", 00:05:58.697 "params": { 00:05:58.697 "buf_count": 2048, 00:05:58.697 "large_cache_size": 16, 00:05:58.697 "sequence_count": 2048, 00:05:58.697 "small_cache_size": 128, 00:05:58.697 "task_count": 2048 00:05:58.698 } 00:05:58.698 } 00:05:58.698 ] 00:05:58.698 }, 00:05:58.698 { 00:05:58.698 "subsystem": "bdev", 00:05:58.698 "config": [ 00:05:58.698 { 00:05:58.698 "method": "bdev_set_options", 00:05:58.698 "params": { 00:05:58.698 "bdev_auto_examine": true, 00:05:58.698 "bdev_io_cache_size": 256, 00:05:58.698 "bdev_io_pool_size": 65535, 00:05:58.698 "iobuf_large_cache_size": 16, 00:05:58.698 "iobuf_small_cache_size": 128 00:05:58.698 } 00:05:58.698 }, 00:05:58.698 { 00:05:58.698 "method": "bdev_raid_set_options", 00:05:58.698 "params": { 00:05:58.698 "process_max_bandwidth_mb_sec": 0, 00:05:58.698 "process_window_size_kb": 1024 00:05:58.698 } 00:05:58.698 }, 00:05:58.698 { 00:05:58.698 "method": "bdev_iscsi_set_options", 00:05:58.698 "params": { 00:05:58.698 "timeout_sec": 30 00:05:58.698 } 00:05:58.698 }, 00:05:58.698 { 00:05:58.698 "method": "bdev_nvme_set_options", 00:05:58.698 "params": { 00:05:58.698 "action_on_timeout": "none", 00:05:58.698 "allow_accel_sequence": false, 00:05:58.698 "arbitration_burst": 0, 00:05:58.698 "bdev_retry_count": 3, 00:05:58.698 "ctrlr_loss_timeout_sec": 0, 00:05:58.698 "delay_cmd_submit": true, 00:05:58.698 "dhchap_dhgroups": [ 00:05:58.698 "null", 00:05:58.698 "ffdhe2048", 00:05:58.698 "ffdhe3072", 00:05:58.698 "ffdhe4096", 00:05:58.698 "ffdhe6144", 00:05:58.698 "ffdhe8192" 00:05:58.698 ], 00:05:58.698 "dhchap_digests": [ 00:05:58.698 "sha256", 00:05:58.698 "sha384", 00:05:58.698 "sha512" 00:05:58.698 ], 00:05:58.698 "disable_auto_failback": false, 00:05:58.698 "fast_io_fail_timeout_sec": 0, 00:05:58.698 "generate_uuids": false, 00:05:58.698 "high_priority_weight": 0, 
00:05:58.698 "io_path_stat": false, 00:05:58.698 "io_queue_requests": 0, 00:05:58.698 "keep_alive_timeout_ms": 10000, 00:05:58.698 "low_priority_weight": 0, 00:05:58.698 "medium_priority_weight": 0, 00:05:58.698 "nvme_adminq_poll_period_us": 10000, 00:05:58.698 "nvme_error_stat": false, 00:05:58.698 "nvme_ioq_poll_period_us": 0, 00:05:58.698 "rdma_cm_event_timeout_ms": 0, 00:05:58.698 "rdma_max_cq_size": 0, 00:05:58.698 "rdma_srq_size": 0, 00:05:58.698 "reconnect_delay_sec": 0, 00:05:58.698 "timeout_admin_us": 0, 00:05:58.698 "timeout_us": 0, 00:05:58.698 "transport_ack_timeout": 0, 00:05:58.698 "transport_retry_count": 4, 00:05:58.698 "transport_tos": 0 00:05:58.698 } 00:05:58.698 }, 00:05:58.698 { 00:05:58.698 "method": "bdev_nvme_set_hotplug", 00:05:58.698 "params": { 00:05:58.698 "enable": false, 00:05:58.698 "period_us": 100000 00:05:58.698 } 00:05:58.698 }, 00:05:58.698 { 00:05:58.698 "method": "bdev_wait_for_examine" 00:05:58.698 } 00:05:58.698 ] 00:05:58.698 }, 00:05:58.698 { 00:05:58.698 "subsystem": "scsi", 00:05:58.698 "config": null 00:05:58.698 }, 00:05:58.698 { 00:05:58.698 "subsystem": "scheduler", 00:05:58.698 "config": [ 00:05:58.698 { 00:05:58.698 "method": "framework_set_scheduler", 00:05:58.698 "params": { 00:05:58.698 "name": "static" 00:05:58.698 } 00:05:58.698 } 00:05:58.698 ] 00:05:58.698 }, 00:05:58.698 { 00:05:58.698 "subsystem": "vhost_scsi", 00:05:58.698 "config": [] 00:05:58.698 }, 00:05:58.698 { 00:05:58.698 "subsystem": "vhost_blk", 00:05:58.698 "config": [] 00:05:58.698 }, 00:05:58.698 { 00:05:58.698 "subsystem": "ublk", 00:05:58.698 "config": [] 00:05:58.698 }, 00:05:58.698 { 00:05:58.698 "subsystem": "nbd", 00:05:58.698 "config": [] 00:05:58.698 }, 00:05:58.698 { 00:05:58.698 "subsystem": "nvmf", 00:05:58.698 "config": [ 00:05:58.698 { 00:05:58.698 "method": "nvmf_set_config", 00:05:58.698 "params": { 00:05:58.698 "admin_cmd_passthru": { 00:05:58.698 "identify_ctrlr": false 00:05:58.698 }, 00:05:58.698 "discovery_filter": "match_any" 00:05:58.698 } 00:05:58.698 }, 00:05:58.698 { 00:05:58.698 "method": "nvmf_set_max_subsystems", 00:05:58.698 "params": { 00:05:58.698 "max_subsystems": 1024 00:05:58.698 } 00:05:58.698 }, 00:05:58.698 { 00:05:58.698 "method": "nvmf_set_crdt", 00:05:58.698 "params": { 00:05:58.698 "crdt1": 0, 00:05:58.698 "crdt2": 0, 00:05:58.698 "crdt3": 0 00:05:58.698 } 00:05:58.698 }, 00:05:58.698 { 00:05:58.698 "method": "nvmf_create_transport", 00:05:58.698 "params": { 00:05:58.698 "abort_timeout_sec": 1, 00:05:58.698 "ack_timeout": 0, 00:05:58.698 "buf_cache_size": 4294967295, 00:05:58.698 "c2h_success": true, 00:05:58.698 "data_wr_pool_size": 0, 00:05:58.698 "dif_insert_or_strip": false, 00:05:58.698 "in_capsule_data_size": 4096, 00:05:58.698 "io_unit_size": 131072, 00:05:58.698 "max_aq_depth": 128, 00:05:58.698 "max_io_qpairs_per_ctrlr": 127, 00:05:58.698 "max_io_size": 131072, 00:05:58.698 "max_queue_depth": 128, 00:05:58.698 "num_shared_buffers": 511, 00:05:58.698 "sock_priority": 0, 00:05:58.698 "trtype": "TCP", 00:05:58.698 "zcopy": false 00:05:58.698 } 00:05:58.698 } 00:05:58.698 ] 00:05:58.698 }, 00:05:58.698 { 00:05:58.698 "subsystem": "iscsi", 00:05:58.698 "config": [ 00:05:58.698 { 00:05:58.698 "method": "iscsi_set_options", 00:05:58.698 "params": { 00:05:58.698 "allow_duplicated_isid": false, 00:05:58.698 "chap_group": 0, 00:05:58.698 "data_out_pool_size": 2048, 00:05:58.698 "default_time2retain": 20, 00:05:58.698 "default_time2wait": 2, 00:05:58.698 "disable_chap": false, 00:05:58.698 "error_recovery_level": 0, 00:05:58.698 
"first_burst_length": 8192, 00:05:58.698 "immediate_data": true, 00:05:58.698 "immediate_data_pool_size": 16384, 00:05:58.698 "max_connections_per_session": 2, 00:05:58.698 "max_large_datain_per_connection": 64, 00:05:58.698 "max_queue_depth": 64, 00:05:58.698 "max_r2t_per_connection": 4, 00:05:58.698 "max_sessions": 128, 00:05:58.698 "mutual_chap": false, 00:05:58.698 "node_base": "iqn.2016-06.io.spdk", 00:05:58.698 "nop_in_interval": 30, 00:05:58.698 "nop_timeout": 60, 00:05:58.698 "pdu_pool_size": 36864, 00:05:58.698 "require_chap": false 00:05:58.698 } 00:05:58.698 } 00:05:58.698 ] 00:05:58.698 } 00:05:58.698 ] 00:05:58.698 } 00:05:58.698 18:13:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:58.698 18:13:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 61848 00:05:58.698 18:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61848 ']' 00:05:58.698 18:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61848 00:05:58.698 18:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:58.698 18:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.698 18:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61848 00:05:58.698 18:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.698 18:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.698 killing process with pid 61848 00:05:58.698 18:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61848' 00:05:58.698 18:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61848 00:05:58.698 18:13:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 61848 00:06:01.229 18:13:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=61916 00:06:01.229 18:13:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:01.229 18:13:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:06.494 18:13:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 61916 00:06:06.494 18:13:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61916 ']' 00:06:06.494 18:13:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61916 00:06:06.494 18:13:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:06.494 18:13:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.494 18:13:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61916 00:06:06.494 18:13:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:06.494 18:13:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:06.494 killing process with pid 61916 00:06:06.494 18:13:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61916' 00:06:06.494 18:13:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61916 00:06:06.494 18:13:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # 
wait 61916 00:06:08.399 18:13:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:08.399 18:13:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:08.400 00:06:08.400 real 0m11.410s 00:06:08.400 user 0m10.694s 00:06:08.400 sys 0m1.113s 00:06:08.400 18:13:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.400 18:13:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.400 ************************************ 00:06:08.400 END TEST skip_rpc_with_json 00:06:08.400 ************************************ 00:06:08.400 18:13:20 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:08.400 18:13:20 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:08.400 18:13:20 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.400 18:13:20 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.400 18:13:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.400 ************************************ 00:06:08.400 START TEST skip_rpc_with_delay 00:06:08.400 ************************************ 00:06:08.400 18:13:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:08.400 18:13:20 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:08.400 18:13:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:08.400 18:13:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:08.400 18:13:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.400 18:13:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.400 18:13:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.400 18:13:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.400 18:13:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.400 18:13:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.400 18:13:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.400 18:13:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:08.400 18:13:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:08.659 [2024-07-22 18:13:20.490867] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:08.659 [2024-07-22 18:13:20.491121] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:08.659 18:13:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:08.659 18:13:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:08.659 18:13:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:08.659 18:13:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:08.659 00:06:08.659 real 0m0.204s 00:06:08.659 user 0m0.118s 00:06:08.659 sys 0m0.084s 00:06:08.659 18:13:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.659 18:13:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:08.659 ************************************ 00:06:08.659 END TEST skip_rpc_with_delay 00:06:08.659 ************************************ 00:06:08.659 18:13:20 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:08.659 18:13:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:08.659 18:13:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:08.659 18:13:20 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:08.659 18:13:20 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.659 18:13:20 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.659 18:13:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.659 ************************************ 00:06:08.659 START TEST exit_on_failed_rpc_init 00:06:08.659 ************************************ 00:06:08.659 18:13:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:08.659 18:13:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=62044 00:06:08.659 18:13:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 62044 00:06:08.659 18:13:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.659 18:13:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 62044 ']' 00:06:08.659 18:13:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.659 18:13:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.659 18:13:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.659 18:13:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.659 18:13:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:08.917 [2024-07-22 18:13:20.767677] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:08.917 [2024-07-22 18:13:20.767918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62044 ] 00:06:09.175 [2024-07-22 18:13:20.946128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.434 [2024-07-22 18:13:21.206803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.368 18:13:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.368 18:13:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:10.368 18:13:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.368 18:13:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:10.368 18:13:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:10.368 18:13:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:10.368 18:13:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:10.368 18:13:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.368 18:13:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:10.368 18:13:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.368 18:13:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:10.368 18:13:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.368 18:13:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:10.368 18:13:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:10.368 18:13:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:10.369 [2024-07-22 18:13:22.194542] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:10.369 [2024-07-22 18:13:22.194727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62080 ] 00:06:10.369 [2024-07-22 18:13:22.374335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.936 [2024-07-22 18:13:22.658411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.936 [2024-07-22 18:13:22.658564] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:10.936 [2024-07-22 18:13:22.658599] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:10.936 [2024-07-22 18:13:22.658625] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:11.194 18:13:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:11.194 18:13:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:11.194 18:13:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:11.194 18:13:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:11.194 18:13:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:11.194 18:13:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:11.194 18:13:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:11.194 18:13:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 62044 00:06:11.194 18:13:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 62044 ']' 00:06:11.194 18:13:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 62044 00:06:11.194 18:13:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:11.194 18:13:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:11.194 18:13:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62044 00:06:11.194 18:13:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:11.194 18:13:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:11.194 killing process with pid 62044 00:06:11.194 18:13:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62044' 00:06:11.194 18:13:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 62044 00:06:11.194 18:13:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 62044 00:06:13.732 00:06:13.732 real 0m4.920s 00:06:13.732 user 0m5.582s 00:06:13.732 sys 0m0.792s 00:06:13.732 18:13:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.732 18:13:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:13.732 ************************************ 00:06:13.732 END TEST exit_on_failed_rpc_init 00:06:13.732 ************************************ 00:06:13.732 18:13:25 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:13.732 18:13:25 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:13.732 00:06:13.732 real 0m24.342s 00:06:13.732 user 0m23.346s 00:06:13.732 sys 0m2.716s 00:06:13.732 18:13:25 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.732 18:13:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.732 ************************************ 00:06:13.732 END TEST skip_rpc 00:06:13.732 ************************************ 00:06:13.732 18:13:25 -- common/autotest_common.sh@1142 -- # return 0 00:06:13.732 18:13:25 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:13.732 18:13:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.732 
18:13:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.732 18:13:25 -- common/autotest_common.sh@10 -- # set +x 00:06:13.732 ************************************ 00:06:13.732 START TEST rpc_client 00:06:13.732 ************************************ 00:06:13.732 18:13:25 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:13.732 * Looking for test storage... 00:06:13.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:13.732 18:13:25 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:13.992 OK 00:06:13.992 18:13:25 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:13.992 00:06:13.992 real 0m0.139s 00:06:13.992 user 0m0.072s 00:06:13.992 sys 0m0.073s 00:06:13.992 18:13:25 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.992 18:13:25 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:13.992 ************************************ 00:06:13.992 END TEST rpc_client 00:06:13.992 ************************************ 00:06:13.992 18:13:25 -- common/autotest_common.sh@1142 -- # return 0 00:06:13.992 18:13:25 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:13.992 18:13:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.992 18:13:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.992 18:13:25 -- common/autotest_common.sh@10 -- # set +x 00:06:13.992 ************************************ 00:06:13.992 START TEST json_config 00:06:13.992 ************************************ 00:06:13.992 18:13:25 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:13.992 18:13:25 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:13.992 18:13:25 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:13.992 18:13:25 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.992 18:13:25 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.992 18:13:25 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.992 18:13:25 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:13.992 18:13:25 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.992 18:13:25 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.992 18:13:25 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.992 18:13:25 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.992 18:13:25 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.992 18:13:25 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.992 18:13:25 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:06:13.992 18:13:25 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:06:13.992 18:13:25 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.992 18:13:25 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.992 18:13:25 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:13.992 18:13:25 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.992 18:13:25 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:13.992 18:13:25 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.992 18:13:25 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.992 18:13:25 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.993 18:13:25 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.993 18:13:25 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.993 18:13:25 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.993 18:13:25 json_config -- paths/export.sh@5 -- # export PATH 00:06:13.993 18:13:25 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.993 18:13:25 json_config -- nvmf/common.sh@47 -- # : 0 00:06:13.993 18:13:25 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:13.993 18:13:25 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:13.993 18:13:25 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.993 18:13:25 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.993 18:13:25 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.993 18:13:25 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:13.993 18:13:25 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:13.993 18:13:25 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:13.993 18:13:25 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:13.993 18:13:25 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:13.993 18:13:25 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:13.993 18:13:25 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:13.993 18:13:25 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:13.993 18:13:25 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:13.993 18:13:25 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:13.993 18:13:25 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:13.993 18:13:25 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:13.993 18:13:25 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:13.993 18:13:25 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:13.993 18:13:25 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:13.993 18:13:25 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:13.993 18:13:25 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:13.993 18:13:25 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:13.993 18:13:25 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:06:13.993 INFO: JSON configuration test init 00:06:13.993 18:13:25 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:06:13.993 18:13:25 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:06:13.993 18:13:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:13.993 18:13:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.993 18:13:25 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:06:13.993 18:13:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:13.993 18:13:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.993 18:13:25 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:06:13.993 18:13:25 json_config -- json_config/common.sh@9 -- # local app=target 00:06:13.993 18:13:25 json_config -- json_config/common.sh@10 -- # shift 00:06:13.993 18:13:25 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:13.993 18:13:25 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:13.993 18:13:25 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:13.993 18:13:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:13.993 18:13:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:13.993 18:13:25 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=62234 00:06:13.993 18:13:25 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:13.993 Waiting for target to run... 00:06:13.993 18:13:25 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:06:13.993 18:13:25 json_config -- json_config/common.sh@25 -- # waitforlisten 62234 /var/tmp/spdk_tgt.sock 00:06:13.993 18:13:25 json_config -- common/autotest_common.sh@829 -- # '[' -z 62234 ']' 00:06:13.993 18:13:25 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:13.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:13.993 18:13:25 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.993 18:13:25 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:13.993 18:13:25 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.993 18:13:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.251 [2024-07-22 18:13:26.023079] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:14.251 [2024-07-22 18:13:26.023271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62234 ] 00:06:14.510 [2024-07-22 18:13:26.475763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.768 [2024-07-22 18:13:26.697762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.026 00:06:15.026 18:13:26 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.026 18:13:26 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:15.026 18:13:26 json_config -- json_config/common.sh@26 -- # echo '' 00:06:15.026 18:13:26 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:06:15.026 18:13:26 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:06:15.026 18:13:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:15.026 18:13:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.026 18:13:26 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:06:15.026 18:13:26 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:06:15.026 18:13:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:15.026 18:13:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.027 18:13:27 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:15.027 18:13:27 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:06:15.027 18:13:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:16.397 18:13:28 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:06:16.397 18:13:28 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:16.397 18:13:28 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:16.397 18:13:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.397 18:13:28 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:16.397 18:13:28 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:16.397 18:13:28 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:16.397 18:13:28 
json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:16.397 18:13:28 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:16.397 18:13:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:16.654 18:13:28 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:16.654 18:13:28 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:16.654 18:13:28 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:06:16.654 18:13:28 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:06:16.654 18:13:28 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:06:16.654 18:13:28 json_config -- json_config/json_config.sh@51 -- # sort 00:06:16.654 18:13:28 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:06:16.654 18:13:28 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:06:16.654 18:13:28 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:06:16.654 18:13:28 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:06:16.654 18:13:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:16.654 18:13:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.654 18:13:28 json_config -- json_config/json_config.sh@59 -- # return 0 00:06:16.654 18:13:28 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:16.654 18:13:28 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:16.654 18:13:28 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:16.654 18:13:28 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:06:16.654 18:13:28 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:06:16.654 18:13:28 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:06:16.654 18:13:28 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:16.654 18:13:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.654 18:13:28 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:16.654 18:13:28 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:06:16.654 18:13:28 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:06:16.654 18:13:28 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:16.654 18:13:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:16.911 MallocForNvmf0 00:06:16.911 18:13:28 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:16.911 18:13:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:17.474 MallocForNvmf1 00:06:17.474 18:13:29 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:17.474 18:13:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:17.731 [2024-07-22 
18:13:29.532841] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.731 18:13:29 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:17.731 18:13:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:17.987 18:13:29 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:17.988 18:13:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:18.244 18:13:30 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:18.244 18:13:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:18.501 18:13:30 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:18.501 18:13:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:18.757 [2024-07-22 18:13:30.633941] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:18.757 18:13:30 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:06:18.757 18:13:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:18.757 18:13:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.757 18:13:30 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:18.757 18:13:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:18.757 18:13:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.757 18:13:30 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:18.758 18:13:30 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:18.758 18:13:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:19.014 MallocBdevForConfigChangeCheck 00:06:19.014 18:13:31 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:19.014 18:13:31 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:19.014 18:13:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.270 18:13:31 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:19.270 18:13:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:19.526 INFO: shutting down applications... 00:06:19.526 18:13:31 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 
00:06:19.526 18:13:31 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:19.526 18:13:31 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:19.526 18:13:31 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:19.526 18:13:31 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:19.784 Calling clear_iscsi_subsystem 00:06:19.784 Calling clear_nvmf_subsystem 00:06:19.784 Calling clear_nbd_subsystem 00:06:19.784 Calling clear_ublk_subsystem 00:06:19.784 Calling clear_vhost_blk_subsystem 00:06:19.784 Calling clear_vhost_scsi_subsystem 00:06:19.784 Calling clear_bdev_subsystem 00:06:19.784 18:13:31 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:19.784 18:13:31 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:19.784 18:13:31 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:19.784 18:13:31 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:19.784 18:13:31 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:19.784 18:13:31 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:20.348 18:13:32 json_config -- json_config/json_config.sh@349 -- # break 00:06:20.348 18:13:32 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:20.348 18:13:32 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:20.348 18:13:32 json_config -- json_config/common.sh@31 -- # local app=target 00:06:20.348 18:13:32 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:20.348 18:13:32 json_config -- json_config/common.sh@35 -- # [[ -n 62234 ]] 00:06:20.348 18:13:32 json_config -- json_config/common.sh@38 -- # kill -SIGINT 62234 00:06:20.348 18:13:32 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:20.348 18:13:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:20.348 18:13:32 json_config -- json_config/common.sh@41 -- # kill -0 62234 00:06:20.348 18:13:32 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:20.606 18:13:32 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:20.606 18:13:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:20.606 18:13:32 json_config -- json_config/common.sh@41 -- # kill -0 62234 00:06:20.606 18:13:32 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:21.172 18:13:33 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:21.172 18:13:33 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:21.172 18:13:33 json_config -- json_config/common.sh@41 -- # kill -0 62234 00:06:21.172 18:13:33 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:21.172 18:13:33 json_config -- json_config/common.sh@43 -- # break 00:06:21.172 18:13:33 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:21.172 SPDK target shutdown done 00:06:21.172 18:13:33 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:21.172 INFO: relaunching applications... 
00:06:21.172 18:13:33 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:06:21.172 18:13:33 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:21.172 18:13:33 json_config -- json_config/common.sh@9 -- # local app=target 00:06:21.172 18:13:33 json_config -- json_config/common.sh@10 -- # shift 00:06:21.172 18:13:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:21.172 18:13:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:21.172 18:13:33 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:21.172 18:13:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:21.172 18:13:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:21.172 18:13:33 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=62532 00:06:21.172 18:13:33 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:21.172 Waiting for target to run... 00:06:21.172 18:13:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:21.172 18:13:33 json_config -- json_config/common.sh@25 -- # waitforlisten 62532 /var/tmp/spdk_tgt.sock 00:06:21.172 18:13:33 json_config -- common/autotest_common.sh@829 -- # '[' -z 62532 ']' 00:06:21.172 18:13:33 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:21.172 18:13:33 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:21.172 18:13:33 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:21.172 18:13:33 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.172 18:13:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.429 [2024-07-22 18:13:33.249910] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:21.429 [2024-07-22 18:13:33.250110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62532 ] 00:06:21.996 [2024-07-22 18:13:33.810991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.254 [2024-07-22 18:13:34.032989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.189 [2024-07-22 18:13:34.943173] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:23.189 [2024-07-22 18:13:34.975551] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:23.189 18:13:35 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.189 00:06:23.189 18:13:35 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:23.189 18:13:35 json_config -- json_config/common.sh@26 -- # echo '' 00:06:23.190 18:13:35 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:23.190 INFO: Checking if target configuration is the same... 
00:06:23.190 18:13:35 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:23.190 18:13:35 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:23.190 18:13:35 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:23.190 18:13:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:23.190 + '[' 2 -ne 2 ']' 00:06:23.190 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:23.190 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:23.190 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:23.190 +++ basename /dev/fd/62 00:06:23.190 ++ mktemp /tmp/62.XXX 00:06:23.190 + tmp_file_1=/tmp/62.vpO 00:06:23.190 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:23.190 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:23.190 + tmp_file_2=/tmp/spdk_tgt_config.json.MSm 00:06:23.190 + ret=0 00:06:23.190 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:23.454 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:23.728 + diff -u /tmp/62.vpO /tmp/spdk_tgt_config.json.MSm 00:06:23.728 INFO: JSON config files are the same 00:06:23.728 + echo 'INFO: JSON config files are the same' 00:06:23.728 + rm /tmp/62.vpO /tmp/spdk_tgt_config.json.MSm 00:06:23.728 + exit 0 00:06:23.728 18:13:35 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:23.728 INFO: changing configuration and checking if this can be detected... 00:06:23.728 18:13:35 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:23.728 18:13:35 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:23.728 18:13:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:23.988 18:13:35 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:23.988 18:13:35 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:23.988 18:13:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:23.988 + '[' 2 -ne 2 ']' 00:06:23.988 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:23.988 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:23.988 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:23.988 +++ basename /dev/fd/62 00:06:23.988 ++ mktemp /tmp/62.XXX 00:06:23.988 + tmp_file_1=/tmp/62.6nA 00:06:23.988 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:23.988 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:23.988 + tmp_file_2=/tmp/spdk_tgt_config.json.DHE 00:06:23.988 + ret=0 00:06:23.988 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:24.246 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:24.246 + diff -u /tmp/62.6nA /tmp/spdk_tgt_config.json.DHE 00:06:24.246 + ret=1 00:06:24.246 + echo '=== Start of file: /tmp/62.6nA ===' 00:06:24.246 + cat /tmp/62.6nA 00:06:24.246 + echo '=== End of file: /tmp/62.6nA ===' 00:06:24.246 + echo '' 00:06:24.246 + echo '=== Start of file: /tmp/spdk_tgt_config.json.DHE ===' 00:06:24.246 + cat /tmp/spdk_tgt_config.json.DHE 00:06:24.246 + echo '=== End of file: /tmp/spdk_tgt_config.json.DHE ===' 00:06:24.246 + echo '' 00:06:24.246 + rm /tmp/62.6nA /tmp/spdk_tgt_config.json.DHE 00:06:24.246 + exit 1 00:06:24.246 INFO: configuration change detected. 00:06:24.246 18:13:36 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:24.246 18:13:36 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:24.246 18:13:36 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:24.246 18:13:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:24.246 18:13:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.246 18:13:36 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:24.246 18:13:36 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:24.246 18:13:36 json_config -- json_config/json_config.sh@321 -- # [[ -n 62532 ]] 00:06:24.246 18:13:36 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:24.246 18:13:36 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:24.246 18:13:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:24.246 18:13:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.246 18:13:36 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:06:24.246 18:13:36 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:24.504 18:13:36 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:24.504 18:13:36 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:24.504 18:13:36 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:24.504 18:13:36 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:24.504 18:13:36 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:24.504 18:13:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.504 18:13:36 json_config -- json_config/json_config.sh@327 -- # killprocess 62532 00:06:24.504 18:13:36 json_config -- common/autotest_common.sh@948 -- # '[' -z 62532 ']' 00:06:24.504 18:13:36 json_config -- common/autotest_common.sh@952 -- # kill -0 62532 00:06:24.504 18:13:36 json_config -- common/autotest_common.sh@953 -- # uname 00:06:24.504 18:13:36 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.504 18:13:36 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62532 00:06:24.504 
18:13:36 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:24.504 18:13:36 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:24.504 killing process with pid 62532 00:06:24.504 18:13:36 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62532' 00:06:24.504 18:13:36 json_config -- common/autotest_common.sh@967 -- # kill 62532 00:06:24.504 18:13:36 json_config -- common/autotest_common.sh@972 -- # wait 62532 00:06:25.439 18:13:37 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:25.439 18:13:37 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:25.439 18:13:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:25.439 18:13:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.439 18:13:37 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:25.439 INFO: Success 00:06:25.439 18:13:37 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:25.439 00:06:25.439 real 0m11.544s 00:06:25.439 user 0m15.384s 00:06:25.439 sys 0m2.377s 00:06:25.439 18:13:37 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.439 18:13:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.439 ************************************ 00:06:25.439 END TEST json_config 00:06:25.439 ************************************ 00:06:25.439 18:13:37 -- common/autotest_common.sh@1142 -- # return 0 00:06:25.439 18:13:37 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:25.439 18:13:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.439 18:13:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.439 18:13:37 -- common/autotest_common.sh@10 -- # set +x 00:06:25.439 ************************************ 00:06:25.439 START TEST json_config_extra_key 00:06:25.439 ************************************ 00:06:25.439 18:13:37 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:25.697 18:13:37 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:25.697 18:13:37 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:25.697 18:13:37 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.697 18:13:37 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.697 18:13:37 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.698 18:13:37 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.698 18:13:37 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.698 18:13:37 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.698 18:13:37 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.698 18:13:37 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.698 18:13:37 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.698 18:13:37 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.698 18:13:37 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:06:25.698 18:13:37 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:06:25.698 18:13:37 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.698 18:13:37 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.698 18:13:37 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:25.698 18:13:37 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.698 18:13:37 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:25.698 18:13:37 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.698 18:13:37 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.698 18:13:37 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.698 18:13:37 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.698 18:13:37 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.698 18:13:37 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.698 18:13:37 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:25.698 18:13:37 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.698 18:13:37 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:25.698 18:13:37 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:25.698 18:13:37 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:25.698 18:13:37 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.698 18:13:37 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.698 18:13:37 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.698 18:13:37 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:25.698 18:13:37 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:25.698 18:13:37 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:25.698 18:13:37 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:25.698 18:13:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:25.698 18:13:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:25.698 18:13:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:25.698 18:13:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:25.698 18:13:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:25.698 18:13:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:25.698 18:13:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:25.698 18:13:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:25.698 18:13:37 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:25.698 INFO: launching applications... 00:06:25.698 18:13:37 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:25.698 18:13:37 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:25.698 18:13:37 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:25.698 18:13:37 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:25.698 18:13:37 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:25.698 18:13:37 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:25.698 18:13:37 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:25.698 18:13:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.698 18:13:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.698 18:13:37 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=62728 00:06:25.698 18:13:37 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:25.698 Waiting for target to run... 
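(Editor's note: the waitforlisten step that continues below polls until the freshly launched spdk_tgt answers on its UNIX-domain RPC socket, up to max_retries=100. The sketch below is a hedged approximation of that wait pattern, not a copy of common.sh; the retry count, sleep interval and the plain "-S" socket check are assumptions made for illustration, and the real helper additionally issues an RPC to confirm the target is responsive.)

#!/usr/bin/env bash
# Illustrative sketch: wait for a daemon to create its UNIX-domain RPC socket,
# giving up after a fixed number of retries.
set -euo pipefail

pid=$1                                # PID of the process we just launched
sock=${2:-/var/tmp/spdk_tgt.sock}     # socket it is expected to listen on
max_retries=${3:-100}

for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2>/dev/null || { echo "process $pid exited early" >&2; exit 1; }
    if [[ -S $sock ]]; then
        echo "process $pid is listening on $sock"
        exit 0
    fi
    sleep 0.1
done

echo "timed out waiting for $sock" >&2
exit 1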
00:06:25.698 18:13:37 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 62728 /var/tmp/spdk_tgt.sock 00:06:25.698 18:13:37 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 62728 ']' 00:06:25.698 18:13:37 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:25.698 18:13:37 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.698 18:13:37 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:25.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:25.698 18:13:37 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:25.698 18:13:37 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.698 18:13:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:25.698 [2024-07-22 18:13:37.654196] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:25.698 [2024-07-22 18:13:37.654435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62728 ] 00:06:26.266 [2024-07-22 18:13:38.142864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.524 [2024-07-22 18:13:38.368111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.091 18:13:39 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.091 00:06:27.091 18:13:39 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:27.091 18:13:39 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:27.091 INFO: shutting down applications... 00:06:27.091 18:13:39 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:27.091 18:13:39 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:27.091 18:13:39 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:27.091 18:13:39 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:27.091 18:13:39 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 62728 ]] 00:06:27.091 18:13:39 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 62728 00:06:27.091 18:13:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:27.091 18:13:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:27.091 18:13:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62728 00:06:27.091 18:13:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:27.658 18:13:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:27.658 18:13:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:27.658 18:13:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62728 00:06:27.658 18:13:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:28.223 18:13:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:28.223 18:13:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:28.223 18:13:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62728 00:06:28.223 18:13:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:28.790 18:13:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:28.790 18:13:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:28.790 18:13:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62728 00:06:28.790 18:13:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:29.048 18:13:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:29.048 18:13:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:29.048 18:13:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62728 00:06:29.048 18:13:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:29.615 18:13:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:29.615 18:13:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:29.615 18:13:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62728 00:06:29.615 18:13:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:30.183 18:13:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:30.183 18:13:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:30.183 18:13:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62728 00:06:30.183 18:13:42 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:30.183 18:13:42 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:30.183 18:13:42 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:30.183 SPDK target shutdown done 00:06:30.183 18:13:42 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:30.183 Success 00:06:30.183 18:13:42 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:30.183 00:06:30.183 real 0m4.610s 00:06:30.183 user 0m4.023s 00:06:30.183 sys 0m0.678s 00:06:30.183 
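(Editor's note on the shutdown trace above: the test sends SIGINT to the target and then polls "kill -0" in half-second steps, up to 30 iterations, until the process is gone. A compact sketch of that pattern follows; the SIGKILL fallback at the end is added purely for illustration and is not something this trace shows.)

#!/usr/bin/env bash
# Illustrative sketch: stop a service gracefully, polling until it exits.
set -euo pipefail

pid=$1
retries=${2:-30}        # 30 x 0.5 s = 15 s budget, mirroring the trace above

kill -SIGINT "$pid" 2>/dev/null || exit 0   # already gone: nothing to do

for ((i = 0; i < retries; i++)); do
    if ! kill -0 "$pid" 2>/dev/null; then
        echo 'target shutdown done'
        exit 0
    fi
    sleep 0.5
done

# Fallback added for illustration only; escalate if the graceful stop stalled.
echo "process $pid did not exit in time, sending SIGKILL" >&2
kill -SIGKILL "$pid" 2>/dev/null || true
exit 1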
18:13:42 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.183 18:13:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:30.183 ************************************ 00:06:30.183 END TEST json_config_extra_key 00:06:30.183 ************************************ 00:06:30.183 18:13:42 -- common/autotest_common.sh@1142 -- # return 0 00:06:30.183 18:13:42 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:30.183 18:13:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.183 18:13:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.183 18:13:42 -- common/autotest_common.sh@10 -- # set +x 00:06:30.183 ************************************ 00:06:30.183 START TEST alias_rpc 00:06:30.183 ************************************ 00:06:30.183 18:13:42 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:30.183 * Looking for test storage... 00:06:30.183 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:30.183 18:13:42 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:30.183 18:13:42 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=62848 00:06:30.183 18:13:42 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 62848 00:06:30.183 18:13:42 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:30.183 18:13:42 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 62848 ']' 00:06:30.183 18:13:42 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.183 18:13:42 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.183 18:13:42 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.183 18:13:42 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.183 18:13:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.442 [2024-07-22 18:13:42.302332] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:30.442 [2024-07-22 18:13:42.302529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62848 ] 00:06:30.700 [2024-07-22 18:13:42.477270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.959 [2024-07-22 18:13:42.775668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.894 18:13:43 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.894 18:13:43 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:31.894 18:13:43 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:32.153 18:13:43 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 62848 00:06:32.153 18:13:43 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 62848 ']' 00:06:32.153 18:13:43 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 62848 00:06:32.153 18:13:43 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:32.153 18:13:43 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:32.153 18:13:43 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62848 00:06:32.153 18:13:43 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:32.153 killing process with pid 62848 00:06:32.153 18:13:43 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:32.153 18:13:43 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62848' 00:06:32.153 18:13:43 alias_rpc -- common/autotest_common.sh@967 -- # kill 62848 00:06:32.153 18:13:43 alias_rpc -- common/autotest_common.sh@972 -- # wait 62848 00:06:34.721 00:06:34.721 real 0m4.235s 00:06:34.721 user 0m4.334s 00:06:34.721 sys 0m0.666s 00:06:34.721 18:13:46 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.721 18:13:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.721 ************************************ 00:06:34.721 END TEST alias_rpc 00:06:34.721 ************************************ 00:06:34.721 18:13:46 -- common/autotest_common.sh@1142 -- # return 0 00:06:34.721 18:13:46 -- spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:06:34.721 18:13:46 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:34.721 18:13:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.721 18:13:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.721 18:13:46 -- common/autotest_common.sh@10 -- # set +x 00:06:34.721 ************************************ 00:06:34.721 START TEST dpdk_mem_utility 00:06:34.721 ************************************ 00:06:34.721 18:13:46 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:34.721 * Looking for test storage... 
00:06:34.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:34.721 18:13:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:34.721 18:13:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=62964 00:06:34.721 18:13:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:34.721 18:13:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 62964 00:06:34.721 18:13:46 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 62964 ']' 00:06:34.721 18:13:46 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.721 18:13:46 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.721 18:13:46 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.721 18:13:46 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.721 18:13:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:34.721 [2024-07-22 18:13:46.607128] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:34.721 [2024-07-22 18:13:46.607323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62964 ] 00:06:34.980 [2024-07-22 18:13:46.784015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.238 [2024-07-22 18:13:47.041011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.175 18:13:47 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.175 18:13:47 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:36.175 18:13:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:36.175 18:13:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:36.175 18:13:47 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.175 18:13:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:36.175 { 00:06:36.175 "filename": "/tmp/spdk_mem_dump.txt" 00:06:36.175 } 00:06:36.175 18:13:47 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.175 18:13:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:36.175 DPDK memory size 820.000000 MiB in 1 heap(s) 00:06:36.175 1 heaps totaling size 820.000000 MiB 00:06:36.175 size: 820.000000 MiB heap id: 0 00:06:36.175 end heaps---------- 00:06:36.175 8 mempools totaling size 598.116089 MiB 00:06:36.175 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:36.175 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:36.175 size: 84.521057 MiB name: bdev_io_62964 00:06:36.175 size: 51.011292 MiB name: evtpool_62964 00:06:36.175 size: 50.003479 MiB name: msgpool_62964 00:06:36.175 size: 21.763794 MiB name: PDU_Pool 00:06:36.175 size: 19.513306 MiB name: SCSI_TASK_Pool 
00:06:36.175 size: 0.026123 MiB name: Session_Pool 00:06:36.175 end mempools------- 00:06:36.175 6 memzones totaling size 4.142822 MiB 00:06:36.175 size: 1.000366 MiB name: RG_ring_0_62964 00:06:36.175 size: 1.000366 MiB name: RG_ring_1_62964 00:06:36.175 size: 1.000366 MiB name: RG_ring_4_62964 00:06:36.175 size: 1.000366 MiB name: RG_ring_5_62964 00:06:36.175 size: 0.125366 MiB name: RG_ring_2_62964 00:06:36.175 size: 0.015991 MiB name: RG_ring_3_62964 00:06:36.175 end memzones------- 00:06:36.175 18:13:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:36.175 heap id: 0 total size: 820.000000 MiB number of busy elements: 226 number of free elements: 18 00:06:36.175 list of free elements. size: 18.469727 MiB 00:06:36.175 element at address: 0x200000400000 with size: 1.999451 MiB 00:06:36.175 element at address: 0x200000800000 with size: 1.996887 MiB 00:06:36.175 element at address: 0x200007000000 with size: 1.995972 MiB 00:06:36.175 element at address: 0x20000b200000 with size: 1.995972 MiB 00:06:36.175 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:36.175 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:36.175 element at address: 0x200019600000 with size: 0.999329 MiB 00:06:36.175 element at address: 0x200003e00000 with size: 0.996094 MiB 00:06:36.175 element at address: 0x200032200000 with size: 0.994324 MiB 00:06:36.175 element at address: 0x200018e00000 with size: 0.959656 MiB 00:06:36.175 element at address: 0x200019900040 with size: 0.937256 MiB 00:06:36.175 element at address: 0x200000200000 with size: 0.834351 MiB 00:06:36.175 element at address: 0x20001b000000 with size: 0.568054 MiB 00:06:36.175 element at address: 0x200019200000 with size: 0.489441 MiB 00:06:36.175 element at address: 0x200019a00000 with size: 0.485413 MiB 00:06:36.175 element at address: 0x200013800000 with size: 0.468628 MiB 00:06:36.175 element at address: 0x200028400000 with size: 0.392883 MiB 00:06:36.175 element at address: 0x200003a00000 with size: 0.356140 MiB 00:06:36.175 list of standard malloc elements. 
size: 199.265869 MiB 00:06:36.175 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:06:36.175 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:06:36.175 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:36.175 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:36.175 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:36.175 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:36.175 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:06:36.175 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:36.175 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:06:36.175 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:06:36.175 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:06:36.175 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d6a80 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:36.175 element at address: 0x200003aff980 with size: 0.000244 MiB 00:06:36.175 element at address: 0x200003affa80 with size: 0.000244 MiB 00:06:36.175 element at address: 0x200003eff000 with size: 0.000244 MiB 00:06:36.175 element at address: 0x20000b1ff180 with size: 0.000244 MiB 
00:06:36.175 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:06:36.175 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:06:36.175 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:06:36.175 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:06:36.175 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:06:36.175 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:06:36.175 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:06:36.175 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:06:36.175 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:06:36.175 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:06:36.175 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:06:36.175 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:06:36.175 element at address: 0x200013877f80 with size: 0.000244 MiB 00:06:36.175 element at address: 0x200013878080 with size: 0.000244 MiB 00:06:36.175 element at address: 0x200013878180 with size: 0.000244 MiB 00:06:36.175 element at address: 0x200013878280 with size: 0.000244 MiB 00:06:36.175 element at address: 0x200013878380 with size: 0.000244 MiB 00:06:36.175 element at address: 0x200013878480 with size: 0.000244 MiB 00:06:36.175 element at address: 0x200013878580 with size: 0.000244 MiB 00:06:36.175 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:06:36.175 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:06:36.175 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:06:36.175 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:06:36.175 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:06:36.175 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:06:36.176 element at address: 0x200019abc680 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:06:36.176 element at 
address: 0x20001b091dc0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b094ec0 
with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:06:36.176 element at address: 0x200028464940 with size: 0.000244 MiB 00:06:36.176 element at address: 0x200028464a40 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846b700 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846b980 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846be80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846c080 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846c180 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846c280 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846c380 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846c480 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846c580 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846c680 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846c780 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846c880 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846c980 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846d080 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846d180 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846d280 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846d380 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846d480 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846d580 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846d680 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846d780 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846d880 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846d980 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846da80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846db80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846de80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846df80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846e080 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846e180 with size: 0.000244 MiB 
00:06:36.176 element at address: 0x20002846e280 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846e380 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846e480 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846e580 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846e680 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846e780 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846e880 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846e980 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846f080 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846f180 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846f280 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846f380 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846f480 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846f580 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846f680 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846f780 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846f880 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846f980 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:06:36.176 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:06:36.176 list of memzone associated elements. 
size: 602.264404 MiB 00:06:36.176 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:06:36.176 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:36.176 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:06:36.176 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:36.176 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:06:36.177 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_62964_0 00:06:36.177 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:06:36.177 associated memzone info: size: 48.002930 MiB name: MP_evtpool_62964_0 00:06:36.177 element at address: 0x200003fff340 with size: 48.003113 MiB 00:06:36.177 associated memzone info: size: 48.002930 MiB name: MP_msgpool_62964_0 00:06:36.177 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:06:36.177 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:36.177 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:06:36.177 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:36.177 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:06:36.177 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_62964 00:06:36.177 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:06:36.177 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_62964 00:06:36.177 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:36.177 associated memzone info: size: 1.007996 MiB name: MP_evtpool_62964 00:06:36.177 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:36.177 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:36.177 element at address: 0x200019abc780 with size: 1.008179 MiB 00:06:36.177 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:36.177 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:36.177 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:36.177 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:06:36.177 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:36.177 element at address: 0x200003eff100 with size: 1.000549 MiB 00:06:36.177 associated memzone info: size: 1.000366 MiB name: RG_ring_0_62964 00:06:36.177 element at address: 0x200003affb80 with size: 1.000549 MiB 00:06:36.177 associated memzone info: size: 1.000366 MiB name: RG_ring_1_62964 00:06:36.177 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:06:36.177 associated memzone info: size: 1.000366 MiB name: RG_ring_4_62964 00:06:36.177 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:06:36.177 associated memzone info: size: 1.000366 MiB name: RG_ring_5_62964 00:06:36.177 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:06:36.177 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_62964 00:06:36.177 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:06:36.177 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:36.177 element at address: 0x200013878680 with size: 0.500549 MiB 00:06:36.177 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:36.177 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:06:36.177 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:36.177 element at address: 0x200003adf740 with size: 0.125549 MiB 00:06:36.177 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_62964 00:06:36.177 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:06:36.177 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:36.177 element at address: 0x200028464b40 with size: 0.023804 MiB 00:06:36.177 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:36.177 element at address: 0x200003adb500 with size: 0.016174 MiB 00:06:36.177 associated memzone info: size: 0.015991 MiB name: RG_ring_3_62964 00:06:36.177 element at address: 0x20002846acc0 with size: 0.002502 MiB 00:06:36.177 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:36.177 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:06:36.177 associated memzone info: size: 0.000183 MiB name: MP_msgpool_62964 00:06:36.177 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:06:36.177 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_62964 00:06:36.177 element at address: 0x20002846b800 with size: 0.000366 MiB 00:06:36.177 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:36.177 18:13:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:36.177 18:13:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 62964 00:06:36.177 18:13:48 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 62964 ']' 00:06:36.177 18:13:48 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 62964 00:06:36.177 18:13:48 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:36.177 18:13:48 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.177 18:13:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62964 00:06:36.177 18:13:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:36.177 18:13:48 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.177 killing process with pid 62964 00:06:36.177 18:13:48 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62964' 00:06:36.177 18:13:48 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 62964 00:06:36.177 18:13:48 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 62964 00:06:38.710 00:06:38.710 real 0m4.052s 00:06:38.710 user 0m3.999s 00:06:38.710 sys 0m0.628s 00:06:38.710 18:13:50 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.710 ************************************ 00:06:38.710 END TEST dpdk_mem_utility 00:06:38.710 ************************************ 00:06:38.710 18:13:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:38.710 18:13:50 -- common/autotest_common.sh@1142 -- # return 0 00:06:38.710 18:13:50 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:38.710 18:13:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.710 18:13:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.710 18:13:50 -- common/autotest_common.sh@10 -- # set +x 00:06:38.710 ************************************ 00:06:38.710 START TEST event 00:06:38.710 ************************************ 00:06:38.710 18:13:50 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:38.710 * Looking for test storage... 
00:06:38.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:38.710 18:13:50 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:38.710 18:13:50 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:38.710 18:13:50 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:38.710 18:13:50 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:38.710 18:13:50 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.711 18:13:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.711 ************************************ 00:06:38.711 START TEST event_perf 00:06:38.711 ************************************ 00:06:38.711 18:13:50 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:38.711 Running I/O for 1 seconds...[2024-07-22 18:13:50.653636] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:38.711 [2024-07-22 18:13:50.653798] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63082 ] 00:06:38.970 [2024-07-22 18:13:50.823883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.228 [2024-07-22 18:13:51.087001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.228 [2024-07-22 18:13:51.087166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.228 Running I/O for 1 seconds...[2024-07-22 18:13:51.087543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.228 [2024-07-22 18:13:51.087551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.650 00:06:40.650 lcore 0: 124322 00:06:40.650 lcore 1: 124320 00:06:40.650 lcore 2: 124322 00:06:40.650 lcore 3: 124322 00:06:40.650 done. 00:06:40.650 00:06:40.650 real 0m1.955s 00:06:40.650 user 0m4.668s 00:06:40.650 sys 0m0.149s 00:06:40.650 18:13:52 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.650 18:13:52 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:40.650 ************************************ 00:06:40.650 END TEST event_perf 00:06:40.650 ************************************ 00:06:40.650 18:13:52 event -- common/autotest_common.sh@1142 -- # return 0 00:06:40.650 18:13:52 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:40.650 18:13:52 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:40.650 18:13:52 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.650 18:13:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:40.650 ************************************ 00:06:40.650 START TEST event_reactor 00:06:40.650 ************************************ 00:06:40.650 18:13:52 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:40.650 [2024-07-22 18:13:52.646433] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:40.650 [2024-07-22 18:13:52.647569] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63127 ] 00:06:40.908 [2024-07-22 18:13:52.818251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.167 [2024-07-22 18:13:53.074425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.538 test_start 00:06:42.538 oneshot 00:06:42.538 tick 100 00:06:42.538 tick 100 00:06:42.538 tick 250 00:06:42.538 tick 100 00:06:42.538 tick 100 00:06:42.538 tick 100 00:06:42.538 tick 250 00:06:42.538 tick 500 00:06:42.538 tick 100 00:06:42.538 tick 100 00:06:42.538 tick 250 00:06:42.538 tick 100 00:06:42.538 tick 100 00:06:42.538 test_end 00:06:42.538 00:06:42.538 real 0m1.877s 00:06:42.538 user 0m1.648s 00:06:42.538 sys 0m0.117s 00:06:42.538 18:13:54 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.538 18:13:54 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:42.538 ************************************ 00:06:42.538 END TEST event_reactor 00:06:42.538 ************************************ 00:06:42.538 18:13:54 event -- common/autotest_common.sh@1142 -- # return 0 00:06:42.538 18:13:54 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:42.538 18:13:54 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:42.538 18:13:54 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.538 18:13:54 event -- common/autotest_common.sh@10 -- # set +x 00:06:42.538 ************************************ 00:06:42.538 START TEST event_reactor_perf 00:06:42.538 ************************************ 00:06:42.538 18:13:54 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:42.797 [2024-07-22 18:13:54.577638] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:42.797 [2024-07-22 18:13:54.577867] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63169 ] 00:06:42.797 [2024-07-22 18:13:54.757348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.055 [2024-07-22 18:13:55.006126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.429 test_start 00:06:44.429 test_end 00:06:44.429 Performance: 255469 events per second 00:06:44.429 00:06:44.429 real 0m1.898s 00:06:44.429 user 0m1.653s 00:06:44.429 sys 0m0.131s 00:06:44.429 18:13:56 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.429 ************************************ 00:06:44.429 END TEST event_reactor_perf 00:06:44.429 ************************************ 00:06:44.429 18:13:56 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:44.687 18:13:56 event -- common/autotest_common.sh@1142 -- # return 0 00:06:44.687 18:13:56 event -- event/event.sh@49 -- # uname -s 00:06:44.687 18:13:56 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:44.687 18:13:56 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:44.687 18:13:56 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.687 18:13:56 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.687 18:13:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.687 ************************************ 00:06:44.687 START TEST event_scheduler 00:06:44.687 ************************************ 00:06:44.687 18:13:56 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:44.687 * Looking for test storage... 00:06:44.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:44.687 18:13:56 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:44.687 18:13:56 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=63233 00:06:44.687 18:13:56 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:44.687 18:13:56 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:44.687 18:13:56 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 63233 00:06:44.687 18:13:56 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 63233 ']' 00:06:44.687 18:13:56 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.687 18:13:56 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.687 18:13:56 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.687 18:13:56 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.687 18:13:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:44.687 [2024-07-22 18:13:56.677156] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:44.687 [2024-07-22 18:13:56.677358] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63233 ] 00:06:44.945 [2024-07-22 18:13:56.842185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:45.222 [2024-07-22 18:13:57.126279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.222 [2024-07-22 18:13:57.126451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.222 [2024-07-22 18:13:57.127126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.222 [2024-07-22 18:13:57.127303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.805 18:13:57 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.806 18:13:57 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:45.806 18:13:57 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:45.806 18:13:57 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.806 18:13:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:45.806 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:45.806 POWER: Cannot set governor of lcore 0 to userspace 00:06:45.806 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:45.806 POWER: Cannot set governor of lcore 0 to performance 00:06:45.806 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:45.806 POWER: Cannot set governor of lcore 0 to userspace 00:06:45.806 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:45.806 POWER: Cannot set governor of lcore 0 to userspace 00:06:45.806 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:45.806 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:45.806 POWER: Unable to set Power Management Environment for lcore 0 00:06:45.806 [2024-07-22 18:13:57.725167] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:45.806 [2024-07-22 18:13:57.725200] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:45.806 [2024-07-22 18:13:57.725228] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:45.806 [2024-07-22 18:13:57.725276] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:45.806 [2024-07-22 18:13:57.725592] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:45.806 [2024-07-22 18:13:57.725615] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:45.806 18:13:57 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.806 18:13:57 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:45.806 18:13:57 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.806 18:13:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.064 [2024-07-22 18:13:58.080183] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
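(Editor's note: the scheduler_create_thread subtest that begins below drives the scheduler test app, which was started with --wait-for-rpc, over its RPC socket through a test-only plugin. The sketch below is only a hedged illustration of that flow, not a copy of scheduler.sh: the SPDK_DIR default, the plugin directory and the PYTHONPATH handling are assumptions, while the RPC names and flags are the ones visible in the trace.)

#!/usr/bin/env bash
# Hedged illustration of the RPC flow exercised in the subtest below.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # assumed checkout location
RPC="$SPDK_DIR/scripts/rpc.py"
# Assumption: the scheduler_plugin module lives under the event scheduler test dir.
export PYTHONPATH="$SPDK_DIR/test/event/scheduler:${PYTHONPATH:-}"

# Switch to the dynamic scheduler, then let the framework finish initializing.
"$RPC" framework_set_scheduler dynamic
"$RPC" framework_start_init

# Create a thread pinned to core 0 reporting 100% activity, as in the trace below,
# and capture the returned thread id.
thread_id=$("$RPC" --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)

# Later the test lowers that thread's busy percentage; same idea here.
"$RPC" --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50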
00:06:46.323 18:13:58 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.323 18:13:58 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:46.323 18:13:58 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:46.323 18:13:58 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.323 18:13:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.323 ************************************ 00:06:46.323 START TEST scheduler_create_thread 00:06:46.323 ************************************ 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.323 2 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.323 3 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.323 4 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.323 5 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.323 6 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.323 7 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.323 8 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.323 9 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.323 10 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.323 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.257 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.257 18:13:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:47.257 18:13:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:47.257 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.257 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.631 ************************************ 00:06:48.631 END TEST scheduler_create_thread 00:06:48.631 ************************************ 00:06:48.631 18:14:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.631 00:06:48.631 real 0m2.139s 00:06:48.631 user 0m0.020s 00:06:48.631 sys 0m0.004s 00:06:48.631 18:14:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.631 18:14:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.631 18:14:00 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:48.631 18:14:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:48.631 18:14:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 63233 00:06:48.631 18:14:00 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 63233 ']' 00:06:48.631 18:14:00 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 63233 00:06:48.631 18:14:00 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:48.631 18:14:00 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.631 18:14:00 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63233 00:06:48.631 killing process with pid 63233 00:06:48.631 18:14:00 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:48.631 18:14:00 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:48.631 18:14:00 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63233' 00:06:48.631 18:14:00 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 63233 00:06:48.631 18:14:00 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 63233 00:06:48.890 [2024-07-22 18:14:00.711409] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
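The scheduler_create_thread test that just finished drives everything through the test's scheduler_plugin for rpc.py. Reduced to its essentials, the lifecycle it exercises looks like the sketch below; the RPC names and flags are taken verbatim from the trace, while the PYTHONPATH export is an assumption (the log does not show how the plugin is made importable):

# Sketch of the thread lifecycle exercised above, via the scheduler_plugin RPCs.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk.sock
# Assumption (not shown in the log): make the test's scheduler_plugin importable
# by rpc.py; the harness arranges this itself.
export PYTHONPATH=/home/vagrant/spdk_repo/spdk/test/event/scheduler:$PYTHONPATH

# Create a thread pinned to core 0 (mask 0x1) at 100% active; the RPC prints
# the new thread id, which the test captures (threads 11 and 12 above).
tid=$("$RPC" -s "$SOCK" --plugin scheduler_plugin \
      scheduler_thread_create -n active_pinned -m 0x1 -a 100)

# Adjust a thread's active percentage (the test sets thread 11 to 50 above).
"$RPC" -s "$SOCK" --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50

# Create a short-lived unpinned thread and delete it again (thread 12 above).
tid2=$("$RPC" -s "$SOCK" --plugin scheduler_plugin \
       scheduler_thread_create -n deleted -a 100)
"$RPC" -s "$SOCK" --plugin scheduler_plugin scheduler_thread_delete "$tid2"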
00:06:50.264 00:06:50.264 real 0m5.437s 00:06:50.264 user 0m9.005s 00:06:50.264 sys 0m0.563s 00:06:50.264 18:14:01 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.264 18:14:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:50.264 ************************************ 00:06:50.264 END TEST event_scheduler 00:06:50.264 ************************************ 00:06:50.264 18:14:01 event -- common/autotest_common.sh@1142 -- # return 0 00:06:50.264 18:14:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:50.264 18:14:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:50.264 18:14:01 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:50.264 18:14:01 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.264 18:14:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:50.264 ************************************ 00:06:50.264 START TEST app_repeat 00:06:50.264 ************************************ 00:06:50.264 18:14:01 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:50.264 18:14:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.264 18:14:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.264 18:14:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:50.264 18:14:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.264 18:14:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:50.264 18:14:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:50.264 18:14:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:50.264 Process app_repeat pid: 63361 00:06:50.264 spdk_app_start Round 0 00:06:50.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:50.264 18:14:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=63361 00:06:50.264 18:14:01 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:50.264 18:14:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 63361' 00:06:50.264 18:14:01 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:50.264 18:14:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:50.264 18:14:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:50.264 18:14:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63361 /var/tmp/spdk-nbd.sock 00:06:50.264 18:14:01 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63361 ']' 00:06:50.264 18:14:01 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:50.264 18:14:01 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.264 18:14:01 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:50.264 18:14:01 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.264 18:14:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:50.264 [2024-07-22 18:14:02.045948] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:50.264 [2024-07-22 18:14:02.046180] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63361 ] 00:06:50.264 [2024-07-22 18:14:02.222891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.522 [2024-07-22 18:14:02.534768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.522 [2024-07-22 18:14:02.534785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.088 18:14:02 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.088 18:14:02 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:51.088 18:14:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.653 Malloc0 00:06:51.653 18:14:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.910 Malloc1 00:06:51.910 18:14:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.910 18:14:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.910 18:14:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.910 18:14:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:51.910 18:14:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.910 18:14:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:51.910 18:14:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.910 18:14:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.910 18:14:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.910 18:14:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.910 18:14:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.910 18:14:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.910 18:14:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:51.910 18:14:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:51.910 18:14:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.910 18:14:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:52.168 /dev/nbd0 00:06:52.168 18:14:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:52.168 18:14:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:52.168 18:14:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:52.168 18:14:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:52.168 18:14:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:52.168 18:14:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:52.168 18:14:04 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:52.168 18:14:04 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:06:52.168 18:14:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:52.168 18:14:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:52.168 18:14:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.168 1+0 records in 00:06:52.168 1+0 records out 00:06:52.168 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421325 s, 9.7 MB/s 00:06:52.168 18:14:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.168 18:14:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:52.168 18:14:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.168 18:14:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:52.168 18:14:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:52.168 18:14:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.168 18:14:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.168 18:14:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:52.426 /dev/nbd1 00:06:52.426 18:14:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:52.426 18:14:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:52.426 18:14:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:52.426 18:14:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:52.426 18:14:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:52.426 18:14:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:52.426 18:14:04 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:52.426 18:14:04 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:52.426 18:14:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:52.426 18:14:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:52.426 18:14:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.426 1+0 records in 00:06:52.426 1+0 records out 00:06:52.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529461 s, 7.7 MB/s 00:06:52.426 18:14:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.426 18:14:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:52.426 18:14:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.426 18:14:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:52.426 18:14:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:52.426 18:14:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.426 18:14:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.426 18:14:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.426 18:14:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.426 
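Setup for each app_repeat round, as traced above: app_repeat listens on /var/tmp/spdk-nbd.sock (it was launched with -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 after modprobe nbd), two malloc bdevs are created with bdev_malloc_create 64 4096, and each is exported as a kernel NBD device, with readiness checked against /proc/partitions. A condensed sketch of that setup using only the calls visible in the trace:

# Sketch: export two malloc bdevs over NBD, as the app_repeat test does above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock   # the -r socket app_repeat was started with

# Two malloc bdevs (size/block-size arguments as used above); rpc.py prints
# the bdev name, e.g. Malloc0 / Malloc1.
bdev0=$("$RPC" -s "$SOCK" bdev_malloc_create 64 4096)
bdev1=$("$RPC" -s "$SOCK" bdev_malloc_create 64 4096)

# Attach each bdev to a kernel NBD device (the nbd module was loaded earlier).
"$RPC" -s "$SOCK" nbd_start_disk "$bdev0" /dev/nbd0
"$RPC" -s "$SOCK" nbd_start_disk "$bdev1" /dev/nbd1

# Same readiness check as waitfornbd: the devices must appear in /proc/partitions.
grep -q -w nbd0 /proc/partitions
grep -q -w nbd1 /proc/partitions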
18:14:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.689 18:14:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:52.689 { 00:06:52.689 "bdev_name": "Malloc0", 00:06:52.689 "nbd_device": "/dev/nbd0" 00:06:52.689 }, 00:06:52.689 { 00:06:52.689 "bdev_name": "Malloc1", 00:06:52.689 "nbd_device": "/dev/nbd1" 00:06:52.689 } 00:06:52.689 ]' 00:06:52.689 18:14:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:52.689 { 00:06:52.689 "bdev_name": "Malloc0", 00:06:52.689 "nbd_device": "/dev/nbd0" 00:06:52.689 }, 00:06:52.689 { 00:06:52.689 "bdev_name": "Malloc1", 00:06:52.689 "nbd_device": "/dev/nbd1" 00:06:52.689 } 00:06:52.689 ]' 00:06:52.689 18:14:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:52.949 /dev/nbd1' 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:52.949 /dev/nbd1' 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:52.949 256+0 records in 00:06:52.949 256+0 records out 00:06:52.949 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00619064 s, 169 MB/s 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:52.949 256+0 records in 00:06:52.949 256+0 records out 00:06:52.949 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0321329 s, 32.6 MB/s 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:52.949 256+0 records in 00:06:52.949 256+0 records out 00:06:52.949 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.032725 s, 32.0 MB/s 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.949 18:14:04 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.949 18:14:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:53.207 18:14:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:53.207 18:14:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:53.207 18:14:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:53.207 18:14:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.207 18:14:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.207 18:14:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:53.207 18:14:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.207 18:14:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.207 18:14:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.207 18:14:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:53.465 18:14:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:53.465 18:14:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:53.465 18:14:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:53.465 18:14:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.465 18:14:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.465 18:14:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:53.465 18:14:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.465 18:14:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.465 18:14:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.465 18:14:05 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.465 18:14:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.723 18:14:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:53.723 18:14:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:53.723 18:14:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.723 18:14:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:53.723 18:14:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:53.723 18:14:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.723 18:14:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:53.723 18:14:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:53.723 18:14:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:53.723 18:14:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:53.723 18:14:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:53.723 18:14:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:53.723 18:14:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:54.289 18:14:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:55.662 [2024-07-22 18:14:07.295029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:55.662 [2024-07-22 18:14:07.522456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.662 [2024-07-22 18:14:07.522464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.921 [2024-07-22 18:14:07.716217] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:55.921 [2024-07-22 18:14:07.716367] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:57.297 18:14:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:57.297 spdk_app_start Round 1 00:06:57.297 18:14:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:57.297 18:14:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63361 /var/tmp/spdk-nbd.sock 00:06:57.297 18:14:09 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63361 ']' 00:06:57.297 18:14:09 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:57.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:57.297 18:14:09 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.297 18:14:09 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
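The data path itself is verified the same way in every round (Round 0 above, Rounds 1 and 2 below): 1 MiB of random data is pushed through each /dev/nbdX with O_DIRECT and compared back byte for byte. For a single device the check reduces to the following sketch; paths and sizes are the ones in the trace:

# Sketch: the per-device data check performed by nbd_dd_data_verify above.
TMP=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest   # temp file used in this run
DEV=/dev/nbd0

# 256 x 4 KiB = 1 MiB of random data, written through the NBD device with
# O_DIRECT so the I/O actually reaches the malloc bdev behind it.
dd if=/dev/urandom of="$TMP" bs=4096 count=256
dd if="$TMP" of="$DEV" bs=4096 count=256 oflag=direct

# Byte-compare the first 1M of the device against the source file; cmp exits
# non-zero on the first mismatch, which fails the test.
cmp -b -n 1M "$TMP" "$DEV"

rm "$TMP"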
00:06:57.297 18:14:09 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.297 18:14:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:57.555 18:14:09 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.555 18:14:09 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:57.555 18:14:09 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:57.813 Malloc0 00:06:57.813 18:14:09 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.093 Malloc1 00:06:58.093 18:14:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.093 18:14:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.093 18:14:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.093 18:14:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:58.093 18:14:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.093 18:14:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:58.093 18:14:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.093 18:14:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.093 18:14:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.093 18:14:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:58.093 18:14:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.093 18:14:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:58.093 18:14:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:58.093 18:14:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:58.093 18:14:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.093 18:14:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:58.351 /dev/nbd0 00:06:58.351 18:14:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:58.351 18:14:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:58.351 18:14:10 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:58.351 18:14:10 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:58.351 18:14:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:58.351 18:14:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:58.351 18:14:10 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:58.351 18:14:10 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:58.351 18:14:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:58.351 18:14:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:58.351 18:14:10 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:58.351 1+0 records in 00:06:58.351 1+0 records out 
00:06:58.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514625 s, 8.0 MB/s 00:06:58.351 18:14:10 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.351 18:14:10 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:58.351 18:14:10 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.351 18:14:10 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:58.351 18:14:10 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:58.351 18:14:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.351 18:14:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.351 18:14:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:58.611 /dev/nbd1 00:06:58.611 18:14:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:58.611 18:14:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:58.611 18:14:10 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:58.611 18:14:10 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:58.611 18:14:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:58.611 18:14:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:58.611 18:14:10 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:58.611 18:14:10 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:58.611 18:14:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:58.611 18:14:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:58.611 18:14:10 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:58.611 1+0 records in 00:06:58.611 1+0 records out 00:06:58.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047789 s, 8.6 MB/s 00:06:58.611 18:14:10 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.869 18:14:10 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:58.869 18:14:10 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.869 18:14:10 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:58.869 18:14:10 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:58.869 18:14:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.869 18:14:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.869 18:14:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.869 18:14:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.869 18:14:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.869 18:14:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:58.869 { 00:06:58.869 "bdev_name": "Malloc0", 00:06:58.869 "nbd_device": "/dev/nbd0" 00:06:58.869 }, 00:06:58.869 { 00:06:58.869 "bdev_name": "Malloc1", 00:06:58.869 "nbd_device": "/dev/nbd1" 00:06:58.869 } 00:06:58.869 
]' 00:06:58.869 18:14:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:58.869 { 00:06:58.869 "bdev_name": "Malloc0", 00:06:58.869 "nbd_device": "/dev/nbd0" 00:06:58.869 }, 00:06:58.869 { 00:06:58.869 "bdev_name": "Malloc1", 00:06:58.869 "nbd_device": "/dev/nbd1" 00:06:58.869 } 00:06:58.869 ]' 00:06:58.869 18:14:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.128 18:14:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:59.128 /dev/nbd1' 00:06:59.128 18:14:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:59.128 /dev/nbd1' 00:06:59.128 18:14:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.128 18:14:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:59.128 18:14:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:59.128 18:14:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:59.128 18:14:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:59.128 18:14:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:59.128 18:14:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.128 18:14:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.128 18:14:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:59.128 18:14:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.128 18:14:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:59.128 18:14:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:59.128 256+0 records in 00:06:59.128 256+0 records out 00:06:59.128 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00896164 s, 117 MB/s 00:06:59.128 18:14:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.128 18:14:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:59.128 256+0 records in 00:06:59.128 256+0 records out 00:06:59.128 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0303927 s, 34.5 MB/s 00:06:59.128 18:14:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.128 18:14:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:59.128 256+0 records in 00:06:59.128 256+0 records out 00:06:59.128 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0378256 s, 27.7 MB/s 00:06:59.128 18:14:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:59.128 18:14:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.128 18:14:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.128 18:14:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:59.128 18:14:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.128 18:14:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:59.128 18:14:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:59.128 18:14:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:06:59.128 18:14:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:59.128 18:14:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.128 18:14:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:59.128 18:14:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.128 18:14:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:59.128 18:14:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.128 18:14:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.128 18:14:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:59.128 18:14:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:59.128 18:14:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.128 18:14:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:59.386 18:14:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:59.386 18:14:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:59.386 18:14:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:59.386 18:14:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.386 18:14:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.386 18:14:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:59.386 18:14:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:59.386 18:14:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.386 18:14:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.386 18:14:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:59.644 18:14:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:59.644 18:14:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:59.644 18:14:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:59.644 18:14:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.644 18:14:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.644 18:14:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:59.644 18:14:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:59.644 18:14:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.644 18:14:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.644 18:14:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.644 18:14:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.903 18:14:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:59.903 18:14:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:59.903 18:14:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:07:00.160 18:14:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:00.160 18:14:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:00.160 18:14:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.160 18:14:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:00.160 18:14:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:00.160 18:14:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:00.160 18:14:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:00.160 18:14:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:00.160 18:14:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:00.160 18:14:11 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:00.418 18:14:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:01.846 [2024-07-22 18:14:13.538903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.846 [2024-07-22 18:14:13.774382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.846 [2024-07-22 18:14:13.774385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.104 [2024-07-22 18:14:13.963093] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:02.104 [2024-07-22 18:14:13.963195] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:03.480 spdk_app_start Round 2 00:07:03.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:03.480 18:14:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:03.480 18:14:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:03.480 18:14:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63361 /var/tmp/spdk-nbd.sock 00:07:03.480 18:14:15 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63361 ']' 00:07:03.480 18:14:15 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:03.480 18:14:15 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.480 18:14:15 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
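Teardown at the end of each round (Rounds 0 and 1 are traced above, Round 2 follows) is the mirror image: both NBD devices are stopped, nbd_get_disks is expected to come back empty, and the app is told to exit with spdk_kill_instance before the harness sleeps and launches the next round. A sketch of that sequence, reusing the jq filter from the trace:

# Sketch: per-round teardown as traced above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock

# Detach both NBD devices from their bdevs.
"$RPC" -s "$SOCK" nbd_stop_disk /dev/nbd0
"$RPC" -s "$SOCK" nbd_stop_disk /dev/nbd1

# Expect an empty disk list once both are gone (grep -c prints 0 on no match).
count=$("$RPC" -s "$SOCK" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 0 ]

# Ask the app to shut down cleanly; the harness then sleeps and starts the
# next spdk_app_start round.
"$RPC" -s "$SOCK" spdk_kill_instance SIGTERM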
00:07:03.480 18:14:15 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.480 18:14:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:03.742 18:14:15 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.742 18:14:15 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:03.742 18:14:15 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.010 Malloc0 00:07:04.010 18:14:15 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.576 Malloc1 00:07:04.576 18:14:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.576 18:14:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.576 18:14:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.576 18:14:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:04.576 18:14:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.576 18:14:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:04.576 18:14:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.576 18:14:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.576 18:14:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.576 18:14:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:04.576 18:14:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.576 18:14:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:04.576 18:14:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:04.576 18:14:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:04.576 18:14:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.576 18:14:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:04.576 /dev/nbd0 00:07:04.835 18:14:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:04.835 18:14:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:04.835 1+0 records in 00:07:04.835 1+0 records out 
00:07:04.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243187 s, 16.8 MB/s 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:04.835 18:14:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.835 18:14:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.835 18:14:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:04.835 /dev/nbd1 00:07:05.093 18:14:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:05.093 18:14:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:05.093 18:14:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:05.093 18:14:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:05.093 18:14:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:05.093 18:14:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:05.094 18:14:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:05.094 18:14:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:05.094 18:14:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:05.094 18:14:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:05.094 18:14:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:05.094 1+0 records in 00:07:05.094 1+0 records out 00:07:05.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290272 s, 14.1 MB/s 00:07:05.094 18:14:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:05.094 18:14:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:05.094 18:14:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:05.094 18:14:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:05.094 18:14:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:05.094 18:14:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.094 18:14:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.094 18:14:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.094 18:14:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.094 18:14:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:05.353 { 00:07:05.353 "bdev_name": "Malloc0", 00:07:05.353 "nbd_device": "/dev/nbd0" 00:07:05.353 }, 00:07:05.353 { 00:07:05.353 "bdev_name": "Malloc1", 00:07:05.353 "nbd_device": "/dev/nbd1" 00:07:05.353 } 
00:07:05.353 ]' 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:05.353 { 00:07:05.353 "bdev_name": "Malloc0", 00:07:05.353 "nbd_device": "/dev/nbd0" 00:07:05.353 }, 00:07:05.353 { 00:07:05.353 "bdev_name": "Malloc1", 00:07:05.353 "nbd_device": "/dev/nbd1" 00:07:05.353 } 00:07:05.353 ]' 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:05.353 /dev/nbd1' 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:05.353 /dev/nbd1' 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:05.353 256+0 records in 00:07:05.353 256+0 records out 00:07:05.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00815546 s, 129 MB/s 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:05.353 256+0 records in 00:07:05.353 256+0 records out 00:07:05.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.033133 s, 31.6 MB/s 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:05.353 256+0 records in 00:07:05.353 256+0 records out 00:07:05.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0354219 s, 29.6 MB/s 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.353 18:14:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:05.611 18:14:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:05.611 18:14:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:05.611 18:14:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:05.611 18:14:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.611 18:14:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.611 18:14:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:05.611 18:14:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:05.611 18:14:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.611 18:14:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.611 18:14:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:05.870 18:14:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:05.870 18:14:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:05.870 18:14:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:05.870 18:14:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.870 18:14:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.870 18:14:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:05.870 18:14:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:05.870 18:14:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.870 18:14:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.870 18:14:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.870 18:14:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.128 18:14:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:06.128 18:14:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:06.128 18:14:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:06.387 18:14:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:06.387 18:14:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:06.387 18:14:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.387 18:14:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:06.387 18:14:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:06.387 18:14:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:06.387 18:14:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:06.387 18:14:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:06.387 18:14:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:06.387 18:14:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:06.953 18:14:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:08.329 [2024-07-22 18:14:19.981961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:08.329 [2024-07-22 18:14:20.243931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.329 [2024-07-22 18:14:20.243937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.587 [2024-07-22 18:14:20.448739] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:08.587 [2024-07-22 18:14:20.448922] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:09.961 18:14:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 63361 /var/tmp/spdk-nbd.sock 00:07:09.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:09.961 18:14:21 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63361 ']' 00:07:09.961 18:14:21 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:09.961 18:14:21 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.961 18:14:21 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
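The nbd_dd_data_verify and nbd_stop_disks calls traced above reduce to a small write/verify/teardown cycle. A minimal sketch of that pattern follows; the paths, block sizes and RPC socket are taken from the trace itself, while the loop structure is an approximation of test/bdev/nbd_common.sh rather than a verbatim copy.

# assumes /dev/nbd0 and /dev/nbd1 are already attached to Malloc0/Malloc1
rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

# write: build a 1 MiB random reference file, then copy it onto every device
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# verify: byte-compare the first 1 MiB of every device against the reference file
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"

# teardown: detach each device and wait until it disappears from /proc/partitions
for dev in "${nbd_list[@]}"; do
    $rpc_py nbd_stop_disk "$dev"
    while grep -q -w "$(basename "$dev")" /proc/partitions; do sleep 0.1; done
done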
00:07:09.961 18:14:21 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.961 18:14:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:09.961 18:14:21 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.961 18:14:21 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:09.961 18:14:21 event.app_repeat -- event/event.sh@39 -- # killprocess 63361 00:07:09.961 18:14:21 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 63361 ']' 00:07:09.961 18:14:21 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 63361 00:07:09.961 18:14:21 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:07:09.961 18:14:21 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:09.961 18:14:21 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63361 00:07:09.961 killing process with pid 63361 00:07:09.961 18:14:21 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:09.962 18:14:21 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:09.962 18:14:21 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63361' 00:07:09.962 18:14:21 event.app_repeat -- common/autotest_common.sh@967 -- # kill 63361 00:07:09.962 18:14:21 event.app_repeat -- common/autotest_common.sh@972 -- # wait 63361 00:07:11.336 spdk_app_start is called in Round 0. 00:07:11.336 Shutdown signal received, stop current app iteration 00:07:11.336 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:07:11.336 spdk_app_start is called in Round 1. 00:07:11.336 Shutdown signal received, stop current app iteration 00:07:11.336 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:07:11.336 spdk_app_start is called in Round 2. 00:07:11.336 Shutdown signal received, stop current app iteration 00:07:11.336 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:07:11.336 spdk_app_start is called in Round 3. 
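The killprocess helper traced here follows roughly this shape; it is a sketch of the test/common/autotest_common.sh behaviour as reconstructed from the xtrace, not the helper itself.

# refuse an empty pid, confirm the target is alive, log its name, then stop and reap it
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                          # still running?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0
    fi
    # the real helper special-cases process_name = sudo before this point
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}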
00:07:11.336 Shutdown signal received, stop current app iteration 00:07:11.336 18:14:23 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:11.336 18:14:23 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:11.336 00:07:11.336 real 0m21.174s 00:07:11.336 user 0m44.837s 00:07:11.336 sys 0m3.374s 00:07:11.336 18:14:23 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.336 18:14:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:11.336 ************************************ 00:07:11.336 END TEST app_repeat 00:07:11.336 ************************************ 00:07:11.336 18:14:23 event -- common/autotest_common.sh@1142 -- # return 0 00:07:11.336 18:14:23 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:11.336 18:14:23 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:11.336 18:14:23 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.336 18:14:23 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.336 18:14:23 event -- common/autotest_common.sh@10 -- # set +x 00:07:11.336 ************************************ 00:07:11.336 START TEST cpu_locks 00:07:11.336 ************************************ 00:07:11.336 18:14:23 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:11.336 * Looking for test storage... 00:07:11.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:11.336 18:14:23 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:11.336 18:14:23 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:11.336 18:14:23 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:11.336 18:14:23 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:11.336 18:14:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.336 18:14:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.336 18:14:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.336 ************************************ 00:07:11.336 START TEST default_locks 00:07:11.336 ************************************ 00:07:11.336 18:14:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:07:11.336 18:14:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=64015 00:07:11.336 18:14:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 64015 00:07:11.336 18:14:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:11.336 18:14:23 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 64015 ']' 00:07:11.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.336 18:14:23 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.336 18:14:23 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.337 18:14:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
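Each 'Waiting for process to start up and listen on UNIX domain socket ...' message comes from the harness's waitforlisten poll loop. The sketch below shows the assumed shape of that loop; the retry cap of 100 matches max_retries in the trace, while the use of rpc_get_methods as the liveness probe is an assumption.

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1      # target died during startup
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
            &> /dev/null; then
            return 0                                 # socket is up and answering RPCs
        fi
        sleep 0.5
    done
    return 1                                         # gave up
}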
00:07:11.337 18:14:23 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.337 18:14:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.595 [2024-07-22 18:14:23.488214] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:11.595 [2024-07-22 18:14:23.488426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64015 ] 00:07:11.853 [2024-07-22 18:14:23.667232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.112 [2024-07-22 18:14:23.905717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.700 18:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.700 18:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:07:12.700 18:14:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 64015 00:07:12.700 18:14:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 64015 00:07:12.700 18:14:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:13.267 18:14:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 64015 00:07:13.267 18:14:25 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 64015 ']' 00:07:13.267 18:14:25 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 64015 00:07:13.267 18:14:25 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:07:13.267 18:14:25 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:13.267 18:14:25 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64015 00:07:13.267 18:14:25 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:13.267 killing process with pid 64015 00:07:13.267 18:14:25 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:13.267 18:14:25 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64015' 00:07:13.267 18:14:25 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 64015 00:07:13.267 18:14:25 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 64015 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 64015 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64015 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 64015 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- 
common/autotest_common.sh@829 -- # '[' -z 64015 ']' 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.797 ERROR: process (pid: 64015) is no longer running 00:07:15.797 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64015) - No such process 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:15.798 18:14:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:15.798 18:14:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:15.798 00:07:15.798 real 0m4.074s 00:07:15.798 user 0m4.096s 00:07:15.798 sys 0m0.763s 00:07:15.798 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.798 ************************************ 00:07:15.798 END TEST default_locks 00:07:15.798 ************************************ 00:07:15.798 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.798 18:14:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:15.798 18:14:27 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:15.798 18:14:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:15.798 18:14:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.798 18:14:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.798 ************************************ 00:07:15.798 START TEST default_locks_via_rpc 00:07:15.798 ************************************ 00:07:15.798 18:14:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:07:15.798 18:14:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=64102 00:07:15.798 18:14:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 64102 00:07:15.798 18:14:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64102 ']' 00:07:15.798 18:14:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:15.798 18:14:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.798 18:14:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.798 18:14:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.798 18:14:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.798 18:14:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.798 [2024-07-22 18:14:27.543817] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:15.798 [2024-07-22 18:14:27.544001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64102 ] 00:07:15.798 [2024-07-22 18:14:27.707252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.056 [2024-07-22 18:14:27.950907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.989 18:14:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.989 18:14:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:16.989 18:14:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:16.989 18:14:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.989 18:14:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.989 18:14:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.989 18:14:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:16.989 18:14:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:16.989 18:14:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:16.989 18:14:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:16.989 18:14:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:16.989 18:14:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.989 18:14:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.989 18:14:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.989 18:14:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 64102 00:07:16.989 18:14:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 64102 00:07:16.989 18:14:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:17.248 18:14:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 64102 00:07:17.248 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 64102 ']' 
00:07:17.248 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 64102 00:07:17.248 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:07:17.248 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:17.248 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64102 00:07:17.248 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:17.248 killing process with pid 64102 00:07:17.248 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:17.248 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64102' 00:07:17.248 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 64102 00:07:17.248 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 64102 00:07:19.785 00:07:19.785 real 0m3.938s 00:07:19.785 user 0m3.904s 00:07:19.785 sys 0m0.742s 00:07:19.785 ************************************ 00:07:19.785 END TEST default_locks_via_rpc 00:07:19.785 ************************************ 00:07:19.785 18:14:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.785 18:14:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.785 18:14:31 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:19.785 18:14:31 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:19.785 18:14:31 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.785 18:14:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.785 18:14:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.785 ************************************ 00:07:19.785 START TEST non_locking_app_on_locked_coremask 00:07:19.785 ************************************ 00:07:19.785 18:14:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:07:19.785 18:14:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=64194 00:07:19.785 18:14:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 64194 /var/tmp/spdk.sock 00:07:19.785 18:14:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:19.785 18:14:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64194 ']' 00:07:19.785 18:14:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.785 18:14:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.785 18:14:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
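default_locks_via_rpc, which finishes just above, toggles the core locks over RPC instead of killing the target: framework_disable_cpumask_locks should release the spdk_cpu_lock file for core 0 and framework_enable_cpumask_locks should re-claim it, with lslocks used as the observer. A condensed sketch of that check, with the pid and RPC names as they appear in the trace:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
pid=64102   # spdk_tgt launched with -m 0x1 for this test

# a core lock "exists" when the target holds a lock on a /var/tmp/spdk_cpu_lock_* file
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

$rpc_py framework_disable_cpumask_locks          # drop the core-0 lock file
locks_exist "$pid" && echo "unexpected: lock still held"
$rpc_py framework_enable_cpumask_locks           # take it back
locks_exist "$pid" && echo "core-0 lock re-acquired"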
00:07:19.785 18:14:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.785 18:14:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.785 [2024-07-22 18:14:31.558276] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:19.785 [2024-07-22 18:14:31.558485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64194 ] 00:07:19.785 [2024-07-22 18:14:31.729930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.044 [2024-07-22 18:14:31.959027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.978 18:14:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.978 18:14:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:20.978 18:14:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=64222 00:07:20.978 18:14:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:20.978 18:14:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 64222 /var/tmp/spdk2.sock 00:07:20.978 18:14:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64222 ']' 00:07:20.978 18:14:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.978 18:14:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:20.978 18:14:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.978 18:14:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.978 18:14:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.978 [2024-07-22 18:14:32.875749] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:20.978 [2024-07-22 18:14:32.875971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64222 ] 00:07:21.248 [2024-07-22 18:14:33.047868] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
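non_locking_app_on_locked_coremask pairs a normally-locked target with a second instance that opts out of cpumask locking, so both can share core 0. A sketch of that setup, using the binary, mask and sockets shown in the trace:

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

# first target claims the lock file for core 0 (RPC socket /var/tmp/spdk.sock)
$spdk_tgt -m 0x1 &
pid1=$!

# second target shares core 0 but skips cpumask locking, so it starts cleanly
# despite pid1's lock; it gets its own RPC socket
$spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!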
00:07:21.248 [2024-07-22 18:14:33.047949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.519 [2024-07-22 18:14:33.530325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.421 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.421 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:23.421 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 64194 00:07:23.421 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64194 00:07:23.421 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:23.987 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 64194 00:07:23.987 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64194 ']' 00:07:23.987 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64194 00:07:23.987 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:23.987 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:23.987 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64194 00:07:23.987 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:23.987 killing process with pid 64194 00:07:23.987 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:23.987 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64194' 00:07:23.987 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64194 00:07:23.987 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64194 00:07:29.250 18:14:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 64222 00:07:29.250 18:14:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64222 ']' 00:07:29.250 18:14:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64222 00:07:29.250 18:14:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:29.250 18:14:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:29.250 18:14:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64222 00:07:29.250 18:14:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:29.250 killing process with pid 64222 00:07:29.250 18:14:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:29.250 18:14:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64222' 00:07:29.250 18:14:40 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64222 00:07:29.250 18:14:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64222 00:07:30.625 00:07:30.625 real 0m11.201s 00:07:30.625 user 0m11.413s 00:07:30.625 sys 0m1.499s 00:07:30.625 18:14:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.625 18:14:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.625 ************************************ 00:07:30.625 END TEST non_locking_app_on_locked_coremask 00:07:30.625 ************************************ 00:07:30.883 18:14:42 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:30.883 18:14:42 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:30.883 18:14:42 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:30.883 18:14:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.883 18:14:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.883 ************************************ 00:07:30.883 START TEST locking_app_on_unlocked_coremask 00:07:30.883 ************************************ 00:07:30.883 18:14:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:30.883 18:14:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=64375 00:07:30.883 18:14:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 64375 /var/tmp/spdk.sock 00:07:30.883 18:14:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64375 ']' 00:07:30.883 18:14:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.883 18:14:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:30.883 18:14:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.883 18:14:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:30.883 18:14:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:30.883 18:14:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.883 [2024-07-22 18:14:42.823639] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:30.883 [2024-07-22 18:14:42.823849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64375 ] 00:07:31.141 [2024-07-22 18:14:43.001502] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:31.141 [2024-07-22 18:14:43.001571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.406 [2024-07-22 18:14:43.242607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.347 18:14:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:32.347 18:14:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:32.347 18:14:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=64408 00:07:32.347 18:14:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 64408 /var/tmp/spdk2.sock 00:07:32.347 18:14:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:32.347 18:14:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64408 ']' 00:07:32.347 18:14:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:32.347 18:14:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:32.347 18:14:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:32.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:32.347 18:14:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:32.347 18:14:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.347 [2024-07-22 18:14:44.166636] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:32.347 [2024-07-22 18:14:44.166830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64408 ] 00:07:32.347 [2024-07-22 18:14:44.343769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.913 [2024-07-22 18:14:44.815955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.850 18:14:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.850 18:14:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:34.850 18:14:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 64408 00:07:34.850 18:14:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64408 00:07:34.850 18:14:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:35.417 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 64375 00:07:35.417 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64375 ']' 00:07:35.417 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 64375 00:07:35.417 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:35.417 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:35.417 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64375 00:07:35.417 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:35.417 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:35.417 killing process with pid 64375 00:07:35.417 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64375' 00:07:35.417 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 64375 00:07:35.417 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 64375 00:07:40.683 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 64408 00:07:40.683 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64408 ']' 00:07:40.683 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 64408 00:07:40.683 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:40.683 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:40.683 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64408 00:07:40.683 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:40.683 killing process with pid 64408 00:07:40.683 18:14:52 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:40.683 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64408' 00:07:40.683 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 64408 00:07:40.683 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 64408 00:07:42.582 00:07:42.582 real 0m11.860s 00:07:42.582 user 0m12.048s 00:07:42.582 sys 0m1.516s 00:07:42.582 ************************************ 00:07:42.582 END TEST locking_app_on_unlocked_coremask 00:07:42.582 ************************************ 00:07:42.582 18:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.582 18:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:42.582 18:14:54 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:42.582 18:14:54 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:42.582 18:14:54 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:42.582 18:14:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.582 18:14:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:42.841 ************************************ 00:07:42.841 START TEST locking_app_on_locked_coremask 00:07:42.841 ************************************ 00:07:42.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.841 18:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:42.841 18:14:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=64566 00:07:42.841 18:14:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 64566 /var/tmp/spdk.sock 00:07:42.841 18:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64566 ']' 00:07:42.841 18:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.841 18:14:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:42.841 18:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.841 18:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.841 18:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.841 18:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:42.841 [2024-07-22 18:14:54.722917] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:42.841 [2024-07-22 18:14:54.723093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64566 ] 00:07:43.099 [2024-07-22 18:14:54.891477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.357 [2024-07-22 18:14:55.155480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.294 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.294 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:44.294 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:44.294 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=64600 00:07:44.294 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 64600 /var/tmp/spdk2.sock 00:07:44.294 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:44.294 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64600 /var/tmp/spdk2.sock 00:07:44.294 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:44.294 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.294 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:44.294 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.294 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 64600 /var/tmp/spdk2.sock 00:07:44.294 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64600 ']' 00:07:44.294 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:44.294 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:44.294 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:44.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:44.294 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:44.294 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.294 [2024-07-22 18:14:56.165214] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
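locking_app_on_locked_coremask inverts that setup: the second target keeps locking enabled, so with core 0 already claimed by pid 64566 it must abort ('Cannot create lock on core 0 ...' in the trace that follows), and the harness treats that failure as the passing condition via its NOT wrapper. A simplified stand-in for that negative check (the real NOT helper in autotest_common.sh also validates its argument, which the sketch omits):

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

NOT() {                      # succeed only if the wrapped command fails
    if "$@"; then
        return 1             # unexpected success
    fi
    return 0
}

$spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &       # same core, locking still enabled
pid2=$!
NOT waitforlisten "$pid2" /var/tmp/spdk2.sock   # pid2 exits with the claim_cpu_cores error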
00:07:44.294 [2024-07-22 18:14:56.165420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64600 ] 00:07:44.640 [2024-07-22 18:14:56.346627] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64566 has claimed it. 00:07:44.640 [2024-07-22 18:14:56.346725] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:44.898 ERROR: process (pid: 64600) is no longer running 00:07:44.898 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64600) - No such process 00:07:44.898 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.898 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:44.898 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:44.898 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:44.898 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:44.898 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:44.898 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 64566 00:07:44.898 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64566 00:07:44.899 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:45.465 18:14:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 64566 00:07:45.465 18:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64566 ']' 00:07:45.465 18:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64566 00:07:45.465 18:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:45.466 18:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:45.466 18:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64566 00:07:45.466 18:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:45.466 18:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:45.466 killing process with pid 64566 00:07:45.466 18:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64566' 00:07:45.466 18:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64566 00:07:45.466 18:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64566 00:07:47.997 00:07:47.997 real 0m5.273s 00:07:47.997 user 0m5.507s 00:07:47.997 sys 0m1.006s 00:07:47.997 ************************************ 00:07:47.997 18:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.997 
18:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.997 END TEST locking_app_on_locked_coremask 00:07:47.997 ************************************ 00:07:47.997 18:14:59 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:47.997 18:14:59 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:47.997 18:14:59 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:47.997 18:14:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.997 18:14:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:47.997 ************************************ 00:07:47.997 START TEST locking_overlapped_coremask 00:07:47.997 ************************************ 00:07:47.997 18:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:47.997 18:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=64681 00:07:47.997 18:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 64681 /var/tmp/spdk.sock 00:07:47.997 18:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 64681 ']' 00:07:47.997 18:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.997 18:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:47.997 18:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:47.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.998 18:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.998 18:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:47.998 18:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:48.257 [2024-07-22 18:15:00.076237] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:48.257 [2024-07-22 18:15:00.076477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64681 ] 00:07:48.257 [2024-07-22 18:15:00.253251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:48.515 [2024-07-22 18:15:00.531226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.515 [2024-07-22 18:15:00.531310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.515 [2024-07-22 18:15:00.531333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.452 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:49.453 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:49.453 18:15:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=64711 00:07:49.453 18:15:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:49.453 18:15:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 64711 /var/tmp/spdk2.sock 00:07:49.453 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:49.453 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64711 /var/tmp/spdk2.sock 00:07:49.453 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:49.453 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:49.453 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:49.453 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:49.453 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 64711 /var/tmp/spdk2.sock 00:07:49.453 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 64711 ']' 00:07:49.453 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:49.453 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:49.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:49.453 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:49.453 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:49.453 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:49.712 [2024-07-22 18:15:01.554902] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
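The overlapped-coremask case is plain mask arithmetic: the first target runs with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so they intersect on core 2, which is exactly the core named in the claim_cpu_cores error below. A one-line check:

# 0x7  = 0b00111 -> cores 0,1,2
# 0x1c = 0b11100 -> cores 2,3,4
printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2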
00:07:49.712 [2024-07-22 18:15:01.555106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64711 ] 00:07:49.982 [2024-07-22 18:15:01.739737] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64681 has claimed it. 00:07:49.982 [2024-07-22 18:15:01.739854] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:50.241 ERROR: process (pid: 64711) is no longer running 00:07:50.241 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64711) - No such process 00:07:50.241 18:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:50.241 18:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:50.241 18:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:50.241 18:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:50.241 18:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:50.241 18:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:50.241 18:15:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:50.241 18:15:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:50.241 18:15:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:50.241 18:15:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:50.241 18:15:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 64681 00:07:50.241 18:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 64681 ']' 00:07:50.241 18:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 64681 00:07:50.241 18:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:50.241 18:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:50.241 18:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64681 00:07:50.241 killing process with pid 64681 00:07:50.241 18:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:50.241 18:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:50.241 18:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64681' 00:07:50.241 18:15:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 64681 00:07:50.241 18:15:02 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 64681 00:07:52.774 00:07:52.774 real 0m4.785s 00:07:52.774 user 0m12.197s 00:07:52.774 sys 0m0.869s 00:07:52.774 18:15:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.774 18:15:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:52.774 ************************************ 00:07:52.774 END TEST locking_overlapped_coremask 00:07:52.774 ************************************ 00:07:52.774 18:15:04 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:52.774 18:15:04 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:52.774 18:15:04 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:52.774 18:15:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.774 18:15:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:52.774 ************************************ 00:07:52.774 START TEST locking_overlapped_coremask_via_rpc 00:07:52.774 ************************************ 00:07:52.774 18:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:52.774 18:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=64781 00:07:52.774 18:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 64781 /var/tmp/spdk.sock 00:07:52.774 18:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64781 ']' 00:07:52.774 18:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.774 18:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:52.774 18:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:52.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.774 18:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.774 18:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:52.774 18:15:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.032 [2024-07-22 18:15:04.927061] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:53.032 [2024-07-22 18:15:04.927270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64781 ] 00:07:53.290 [2024-07-22 18:15:05.101269] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
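The check_remaining_locks helper exercised at the end of the locking_overlapped_coremask case above (and re-run at the end of the via_rpc variant below) boils down to a literal comparison between the lock files actually present under /var/tmp and the set expected for cores 0-2. A minimal standalone sketch, assuming a single target started with -m 0x7; the function wrapper and local declarations are added here for readability:

    check_remaining_locks() {
        # One lock file per claimed core; -m 0x7 should leave exactly these three.
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ "${locks[*]}" == "${locks_expected[*]}" ]]
    }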
00:07:53.290 [2024-07-22 18:15:05.101375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:53.549 [2024-07-22 18:15:05.440415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.549 [2024-07-22 18:15:05.440561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.549 [2024-07-22 18:15:05.440580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:54.483 18:15:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:54.483 18:15:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:54.483 18:15:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=64822 00:07:54.483 18:15:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 64822 /var/tmp/spdk2.sock 00:07:54.483 18:15:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:54.483 18:15:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64822 ']' 00:07:54.483 18:15:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:54.483 18:15:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:54.483 18:15:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:54.483 18:15:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:54.483 18:15:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.483 [2024-07-22 18:15:06.494713] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:54.483 [2024-07-22 18:15:06.495197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64822 ] 00:07:54.741 [2024-07-22 18:15:06.670091] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:54.741 [2024-07-22 18:15:06.670159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:55.363 [2024-07-22 18:15:07.178732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:55.363 [2024-07-22 18:15:07.182034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.363 [2024-07-22 18:15:07.182069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.263 [2024-07-22 18:15:08.805197] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64781 has claimed it. 
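Unlike the previous case, both targets here came up successfully because they were started with --disable-cpumask-locks; the core locks are only claimed once framework_enable_cpumask_locks is called, which is why the conflict on core 2 surfaces only now. A condensed sketch of the sequence being exercised (binary paths shortened; rpc.py assumed as the stock SPDK RPC client in place of the test's rpc_cmd wrapper):

    spdk_tgt -m 0x7  --disable-cpumask-locks &                        # cores 0-2, no locks taken yet
    spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock & # cores 2-4, no locks taken yet
    rpc.py framework_enable_cpumask_locks                             # first target now locks cores 0-2
    rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks      # fails: core 2 is already locked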
00:07:57.263 2024/07/22 18:15:08 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:07:57.263 request: 00:07:57.263 { 00:07:57.263 "method": "framework_enable_cpumask_locks", 00:07:57.263 "params": {} 00:07:57.263 } 00:07:57.263 Got JSON-RPC error response 00:07:57.263 GoRPCClient: error on JSON-RPC call 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 64781 /var/tmp/spdk.sock 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64781 ']' 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:57.263 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.263 18:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:57.263 18:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:57.263 18:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 64822 /var/tmp/spdk2.sock 00:07:57.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:57.264 18:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64822 ']' 00:07:57.264 18:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:57.264 18:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:57.264 18:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
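The failed framework_enable_cpumask_locks call above is reported by the Go RPC client with code -32603, the standard JSON-RPC 2.0 "internal error" code, and the log prints only the method and params of the request. For reference, the full request body would look roughly like the line below; the jsonrpc and id fields are the usual JSON-RPC 2.0 envelope and are an assumption here, since the log omits them:

    request='{"jsonrpc": "2.0", "id": 1, "method": "framework_enable_cpumask_locks", "params": {}}'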
00:07:57.264 18:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:57.264 18:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.522 18:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:57.522 18:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:57.522 18:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:57.523 18:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:57.523 18:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:57.523 18:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:57.523 ************************************ 00:07:57.523 END TEST locking_overlapped_coremask_via_rpc 00:07:57.523 ************************************ 00:07:57.523 00:07:57.523 real 0m4.635s 00:07:57.523 user 0m1.367s 00:07:57.523 sys 0m0.269s 00:07:57.523 18:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.523 18:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.523 18:15:09 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:57.523 18:15:09 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:57.523 18:15:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64781 ]] 00:07:57.523 18:15:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64781 00:07:57.523 18:15:09 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64781 ']' 00:07:57.523 18:15:09 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64781 00:07:57.523 18:15:09 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:57.523 18:15:09 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:57.523 18:15:09 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64781 00:07:57.523 killing process with pid 64781 00:07:57.523 18:15:09 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:57.523 18:15:09 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:57.523 18:15:09 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64781' 00:07:57.523 18:15:09 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 64781 00:07:57.523 18:15:09 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 64781 00:08:00.055 18:15:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64822 ]] 00:08:00.055 18:15:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64822 00:08:00.055 18:15:11 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64822 ']' 00:08:00.055 18:15:11 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64822 00:08:00.055 18:15:11 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:08:00.055 18:15:11 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:00.055 18:15:11 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64822 00:08:00.055 killing process with pid 64822 00:08:00.055 18:15:11 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:08:00.055 18:15:11 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:08:00.055 18:15:11 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64822' 00:08:00.055 18:15:11 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 64822 00:08:00.055 18:15:11 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 64822 00:08:02.609 18:15:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:02.609 Process with pid 64781 is not found 00:08:02.609 18:15:14 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:02.609 18:15:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64781 ]] 00:08:02.609 18:15:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64781 00:08:02.609 18:15:14 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64781 ']' 00:08:02.609 18:15:14 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64781 00:08:02.609 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (64781) - No such process 00:08:02.609 18:15:14 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 64781 is not found' 00:08:02.609 18:15:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64822 ]] 00:08:02.609 18:15:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64822 00:08:02.609 18:15:14 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64822 ']' 00:08:02.609 18:15:14 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64822 00:08:02.609 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (64822) - No such process 00:08:02.609 Process with pid 64822 is not found 00:08:02.609 18:15:14 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 64822 is not found' 00:08:02.609 18:15:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:02.609 00:08:02.609 real 0m50.858s 00:08:02.609 user 1m23.812s 00:08:02.609 sys 0m8.034s 00:08:02.609 18:15:14 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.609 ************************************ 00:08:02.609 END TEST cpu_locks 00:08:02.609 ************************************ 00:08:02.609 18:15:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:02.609 18:15:14 event -- common/autotest_common.sh@1142 -- # return 0 00:08:02.609 ************************************ 00:08:02.609 END TEST event 00:08:02.609 ************************************ 00:08:02.609 00:08:02.609 real 1m23.626s 00:08:02.609 user 2m25.752s 00:08:02.609 sys 0m12.638s 00:08:02.609 18:15:14 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.609 18:15:14 event -- common/autotest_common.sh@10 -- # set +x 00:08:02.609 18:15:14 -- common/autotest_common.sh@1142 -- # return 0 00:08:02.609 18:15:14 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:02.609 18:15:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:02.609 18:15:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.609 18:15:14 -- common/autotest_common.sh@10 -- # set +x 00:08:02.609 ************************************ 00:08:02.609 START TEST thread 
00:08:02.609 ************************************ 00:08:02.609 18:15:14 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:02.609 * Looking for test storage... 00:08:02.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:02.609 18:15:14 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:02.609 18:15:14 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:02.609 18:15:14 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.609 18:15:14 thread -- common/autotest_common.sh@10 -- # set +x 00:08:02.609 ************************************ 00:08:02.609 START TEST thread_poller_perf 00:08:02.610 ************************************ 00:08:02.610 18:15:14 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:02.610 [2024-07-22 18:15:14.301898] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:02.610 [2024-07-22 18:15:14.302183] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65026 ] 00:08:02.610 [2024-07-22 18:15:14.469754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.866 [2024-07-22 18:15:14.770087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.866 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:04.237 ====================================== 00:08:04.237 busy:2211330525 (cyc) 00:08:04.237 total_run_count: 299000 00:08:04.237 tsc_hz: 2200000000 (cyc) 00:08:04.237 ====================================== 00:08:04.237 poller_cost: 7395 (cyc), 3361 (nsec) 00:08:04.237 00:08:04.237 real 0m1.975s 00:08:04.237 user 0m1.736s 00:08:04.237 sys 0m0.126s 00:08:04.237 ************************************ 00:08:04.238 END TEST thread_poller_perf 00:08:04.238 ************************************ 00:08:04.238 18:15:16 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.238 18:15:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:04.499 18:15:16 thread -- common/autotest_common.sh@1142 -- # return 0 00:08:04.499 18:15:16 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:04.499 18:15:16 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:04.499 18:15:16 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.499 18:15:16 thread -- common/autotest_common.sh@10 -- # set +x 00:08:04.499 ************************************ 00:08:04.499 START TEST thread_poller_perf 00:08:04.499 ************************************ 00:08:04.500 18:15:16 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:04.500 [2024-07-22 18:15:16.337994] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
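The first poller_perf summary above (busy 2211330525 cyc over 299000 runs) is internally consistent: busy is roughly 2.2e9 cycles, about the one second requested with -t 1, and poller_cost is simply busy divided by total_run_count, converted to nanoseconds via tsc_hz. A quick re-derivation with shell integer arithmetic, which reproduces the 7395 cyc and 3361 nsec figures printed above:

    echo $(( 2211330525 / 299000 ))             # ~7395 cycles per poll
    echo $(( 7395 * 1000000000 / 2200000000 ))  # ~3361 ns at the 2.2 GHz TSC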
00:08:04.500 [2024-07-22 18:15:16.338132] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65068 ] 00:08:04.500 [2024-07-22 18:15:16.498517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.763 [2024-07-22 18:15:16.776475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.763 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:06.663 ====================================== 00:08:06.663 busy:2204375184 (cyc) 00:08:06.663 total_run_count: 3690000 00:08:06.663 tsc_hz: 2200000000 (cyc) 00:08:06.663 ====================================== 00:08:06.663 poller_cost: 597 (cyc), 271 (nsec) 00:08:06.663 00:08:06.663 real 0m1.977s 00:08:06.663 user 0m1.732s 00:08:06.663 sys 0m0.132s 00:08:06.663 18:15:18 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.663 ************************************ 00:08:06.663 END TEST thread_poller_perf 00:08:06.663 ************************************ 00:08:06.663 18:15:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:06.663 18:15:18 thread -- common/autotest_common.sh@1142 -- # return 0 00:08:06.663 18:15:18 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:06.663 00:08:06.663 real 0m4.151s 00:08:06.663 user 0m3.536s 00:08:06.663 sys 0m0.378s 00:08:06.663 18:15:18 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.663 ************************************ 00:08:06.663 END TEST thread 00:08:06.663 ************************************ 00:08:06.663 18:15:18 thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.663 18:15:18 -- common/autotest_common.sh@1142 -- # return 0 00:08:06.663 18:15:18 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:08:06.663 18:15:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:06.663 18:15:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.663 18:15:18 -- common/autotest_common.sh@10 -- # set +x 00:08:06.663 ************************************ 00:08:06.663 START TEST accel 00:08:06.663 ************************************ 00:08:06.663 18:15:18 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:08:06.663 * Looking for test storage... 00:08:06.663 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:06.663 18:15:18 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:08:06.663 18:15:18 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:08:06.663 18:15:18 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:06.663 18:15:18 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=65149 00:08:06.663 18:15:18 accel -- accel/accel.sh@63 -- # waitforlisten 65149 00:08:06.663 18:15:18 accel -- common/autotest_common.sh@829 -- # '[' -z 65149 ']' 00:08:06.663 18:15:18 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.663 18:15:18 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:06.663 18:15:18 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
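The second, zero-period run above (busy 2204375184 cyc over 3690000 runs) differs from the first only in the poller period: with -l 0 the 1000 pollers are run on every reactor iteration rather than on a 1 microsecond timer, so total_run_count is roughly an order of magnitude higher and the per-poll cost correspondingly lower. The same arithmetic applied to this run reproduces the 597 cyc and 271 nsec figures:

    echo $(( 2204375184 / 3690000 ))            # ~597 cycles per poll
    echo $(( 597 * 1000000000 / 2200000000 ))   # ~271 ns at 2.2 GHz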
00:08:06.663 18:15:18 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:06.663 18:15:18 accel -- common/autotest_common.sh@10 -- # set +x 00:08:06.663 18:15:18 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:06.663 18:15:18 accel -- accel/accel.sh@61 -- # build_accel_config 00:08:06.663 18:15:18 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:06.663 18:15:18 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:06.663 18:15:18 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.663 18:15:18 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.663 18:15:18 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:06.663 18:15:18 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:06.663 18:15:18 accel -- accel/accel.sh@41 -- # jq -r . 00:08:06.663 [2024-07-22 18:15:18.584970] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:06.663 [2024-07-22 18:15:18.585196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65149 ] 00:08:06.921 [2024-07-22 18:15:18.756711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.179 [2024-07-22 18:15:19.031845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.149 18:15:19 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:08.149 18:15:19 accel -- common/autotest_common.sh@862 -- # return 0 00:08:08.149 18:15:19 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:08:08.149 18:15:19 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:08:08.149 18:15:19 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:08:08.149 18:15:19 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:08:08.149 18:15:19 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:08.149 18:15:19 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:08:08.149 18:15:19 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:08:08.149 18:15:19 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.149 18:15:19 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.149 18:15:19 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.149 18:15:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.149 18:15:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.149 18:15:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.149 18:15:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.149 18:15:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.149 18:15:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.149 18:15:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.149 18:15:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.149 18:15:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.149 18:15:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.149 18:15:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.149 18:15:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.149 18:15:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.149 18:15:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.149 18:15:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.149 18:15:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.149 18:15:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.149 18:15:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.149 18:15:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.149 18:15:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.149 18:15:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # IFS== 
00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.149 18:15:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.149 18:15:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.149 18:15:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.149 18:15:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.149 18:15:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.149 18:15:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.149 18:15:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.149 18:15:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.149 18:15:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.149 18:15:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.149 18:15:19 accel -- accel/accel.sh@75 -- # killprocess 65149 00:08:08.149 18:15:19 accel -- common/autotest_common.sh@948 -- # '[' -z 65149 ']' 00:08:08.149 18:15:19 accel -- common/autotest_common.sh@952 -- # kill -0 65149 00:08:08.149 18:15:19 accel -- common/autotest_common.sh@953 -- # uname 00:08:08.149 18:15:19 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:08.149 18:15:19 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65149 00:08:08.149 killing process with pid 65149 00:08:08.149 18:15:20 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:08.150 18:15:20 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:08.150 18:15:20 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65149' 00:08:08.150 18:15:20 accel -- common/autotest_common.sh@967 -- # kill 65149 00:08:08.150 18:15:20 accel -- common/autotest_common.sh@972 -- # wait 65149 00:08:10.679 18:15:22 accel -- accel/accel.sh@76 -- # trap - ERR 00:08:10.679 18:15:22 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:08:10.679 18:15:22 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:10.679 18:15:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.679 18:15:22 accel -- common/autotest_common.sh@10 -- # set +x 00:08:10.679 18:15:22 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:08:10.679 18:15:22 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:08:10.679 18:15:22 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:08:10.679 18:15:22 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:10.679 18:15:22 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:10.679 18:15:22 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.679 18:15:22 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.679 18:15:22 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:10.679 18:15:22 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:08:10.679 18:15:22 
accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:08:10.679 18:15:22 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.679 18:15:22 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:08:10.679 18:15:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:10.679 18:15:22 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:08:10.679 18:15:22 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:10.679 18:15:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.679 18:15:22 accel -- common/autotest_common.sh@10 -- # set +x 00:08:10.679 ************************************ 00:08:10.679 START TEST accel_missing_filename 00:08:10.679 ************************************ 00:08:10.679 18:15:22 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:08:10.679 18:15:22 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:08:10.679 18:15:22 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:08:10.679 18:15:22 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:10.679 18:15:22 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:10.679 18:15:22 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:10.679 18:15:22 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:10.679 18:15:22 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:08:10.679 18:15:22 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:08:10.679 18:15:22 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:08:10.679 18:15:22 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:10.679 18:15:22 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:10.679 18:15:22 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.679 18:15:22 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.679 18:15:22 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:10.679 18:15:22 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:08:10.679 18:15:22 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:08:10.679 [2024-07-22 18:15:22.622043] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:10.679 [2024-07-22 18:15:22.622224] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65248 ] 00:08:10.942 [2024-07-22 18:15:22.798693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.201 [2024-07-22 18:15:23.078922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.466 [2024-07-22 18:15:23.311072] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:12.033 [2024-07-22 18:15:23.850003] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:08:12.290 A filename is required. 
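The "A filename is required." abort above is the expected result of running the compress workload without -l; per the accel_perf usage text printed further down, -l names the uncompressed input file for compress/decompress workloads. The working form would be the command below (binary path shortened; the input path is taken from the next test case in this log, and it assumes a build whose software accel module provides compress). The accel_compress_verify case that follows then shows that adding -y on top of it is rejected:

    accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib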
00:08:12.548 ************************************ 00:08:12.548 END TEST accel_missing_filename 00:08:12.549 ************************************ 00:08:12.549 18:15:24 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:08:12.549 18:15:24 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:12.549 18:15:24 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:08:12.549 18:15:24 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:08:12.549 18:15:24 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:08:12.549 18:15:24 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:12.549 00:08:12.549 real 0m1.740s 00:08:12.549 user 0m1.440s 00:08:12.549 sys 0m0.236s 00:08:12.549 18:15:24 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.549 18:15:24 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:08:12.549 18:15:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:12.549 18:15:24 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:12.549 18:15:24 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:08:12.549 18:15:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.549 18:15:24 accel -- common/autotest_common.sh@10 -- # set +x 00:08:12.549 ************************************ 00:08:12.549 START TEST accel_compress_verify 00:08:12.549 ************************************ 00:08:12.549 18:15:24 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:12.549 18:15:24 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:08:12.549 18:15:24 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:12.549 18:15:24 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:12.549 18:15:24 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:12.549 18:15:24 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:12.549 18:15:24 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:12.549 18:15:24 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:12.549 18:15:24 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:12.549 18:15:24 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:12.549 18:15:24 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:12.549 18:15:24 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:12.549 18:15:24 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.549 18:15:24 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.549 18:15:24 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:12.549 18:15:24 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:08:12.549 18:15:24 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:08:12.549 [2024-07-22 18:15:24.421187] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:12.549 [2024-07-22 18:15:24.421499] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65284 ] 00:08:12.808 [2024-07-22 18:15:24.595291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.066 [2024-07-22 18:15:24.871692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.324 [2024-07-22 18:15:25.096501] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:13.920 [2024-07-22 18:15:25.633355] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:08:14.179 00:08:14.179 Compression does not support the verify option, aborting. 00:08:14.179 18:15:26 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:08:14.179 18:15:26 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:14.179 18:15:26 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:08:14.179 18:15:26 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:08:14.179 18:15:26 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:08:14.179 18:15:26 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:14.179 00:08:14.179 real 0m1.721s 00:08:14.179 user 0m1.418s 00:08:14.179 sys 0m0.236s 00:08:14.179 ************************************ 00:08:14.179 END TEST accel_compress_verify 00:08:14.179 ************************************ 00:08:14.179 18:15:26 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.179 18:15:26 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:08:14.179 18:15:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:14.179 18:15:26 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:14.179 18:15:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:14.179 18:15:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.179 18:15:26 accel -- common/autotest_common.sh@10 -- # set +x 00:08:14.179 ************************************ 00:08:14.179 START TEST accel_wrong_workload 00:08:14.179 ************************************ 00:08:14.179 18:15:26 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:08:14.179 18:15:26 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:08:14.179 18:15:26 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:14.179 18:15:26 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:14.179 18:15:26 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:14.179 18:15:26 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:14.179 18:15:26 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:14.179 18:15:26 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:08:14.179 18:15:26 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:14.179 18:15:26 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:08:14.179 18:15:26 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:14.179 18:15:26 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:14.179 18:15:26 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:14.179 18:15:26 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:14.179 18:15:26 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:14.179 18:15:26 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:08:14.179 18:15:26 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:08:14.179 Unsupported workload type: foobar 00:08:14.179 [2024-07-22 18:15:26.188897] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:14.438 accel_perf options: 00:08:14.438 [-h help message] 00:08:14.438 [-q queue depth per core] 00:08:14.438 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:14.438 [-T number of threads per core 00:08:14.438 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:14.438 [-t time in seconds] 00:08:14.438 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:14.438 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:14.438 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:14.438 [-l for compress/decompress workloads, name of uncompressed input file 00:08:14.438 [-S for crc32c workload, use this seed value (default 0) 00:08:14.438 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:14.439 [-f for fill workload, use this BYTE value (default 255) 00:08:14.439 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:14.439 [-y verify result if this switch is on] 00:08:14.439 [-a tasks to allocate per core (default: same value as -q)] 00:08:14.439 Can be used to spread operations across a wider range of memory. 
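The usage text above also covers the flags used by the remaining cases in this run. Two examples built only from that help text (binary path shortened): the first matches the accel_crc32c case started further down, which seeds the CRC with -S 32 and verifies results with -y; the second is a valid counterpart to the accel_negative_buffers case, since xor requires at least two source buffers:

    accel_perf -t 1 -w crc32c -S 32 -y   # crc32c with a non-default seed, verify on
    accel_perf -t 1 -w xor -y -x 2       # xor with the minimum of two source buffers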
00:08:14.439 18:15:26 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:08:14.439 18:15:26 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:14.439 18:15:26 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:14.439 18:15:26 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:14.439 00:08:14.439 real 0m0.082s 00:08:14.439 user 0m0.080s 00:08:14.439 sys 0m0.046s 00:08:14.439 ************************************ 00:08:14.439 18:15:26 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.439 18:15:26 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:08:14.439 END TEST accel_wrong_workload 00:08:14.439 ************************************ 00:08:14.439 18:15:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:14.439 18:15:26 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:14.439 18:15:26 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:08:14.439 18:15:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.439 18:15:26 accel -- common/autotest_common.sh@10 -- # set +x 00:08:14.439 ************************************ 00:08:14.439 START TEST accel_negative_buffers 00:08:14.439 ************************************ 00:08:14.439 18:15:26 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:14.439 18:15:26 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:08:14.439 18:15:26 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:14.439 18:15:26 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:14.439 18:15:26 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:14.439 18:15:26 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:14.439 18:15:26 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:14.439 18:15:26 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:08:14.439 18:15:26 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:08:14.439 18:15:26 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:08:14.439 18:15:26 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:14.439 18:15:26 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:14.439 18:15:26 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:14.439 18:15:26 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:14.439 18:15:26 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:14.439 18:15:26 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:08:14.439 18:15:26 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:08:14.439 -x option must be non-negative. 
00:08:14.439 [2024-07-22 18:15:26.334052] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:14.439 accel_perf options: 00:08:14.439 [-h help message] 00:08:14.439 [-q queue depth per core] 00:08:14.439 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:14.439 [-T number of threads per core 00:08:14.439 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:14.439 [-t time in seconds] 00:08:14.439 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:14.439 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:14.439 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:14.439 [-l for compress/decompress workloads, name of uncompressed input file 00:08:14.439 [-S for crc32c workload, use this seed value (default 0) 00:08:14.439 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:14.439 [-f for fill workload, use this BYTE value (default 255) 00:08:14.439 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:14.439 [-y verify result if this switch is on] 00:08:14.439 [-a tasks to allocate per core (default: same value as -q)] 00:08:14.439 Can be used to spread operations across a wider range of memory. 00:08:14.439 18:15:26 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:08:14.439 18:15:26 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:14.439 18:15:26 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:14.439 18:15:26 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:14.439 00:08:14.439 real 0m0.102s 00:08:14.439 user 0m0.094s 00:08:14.439 sys 0m0.051s 00:08:14.439 18:15:26 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.439 18:15:26 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:08:14.439 ************************************ 00:08:14.439 END TEST accel_negative_buffers 00:08:14.439 ************************************ 00:08:14.439 18:15:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:14.439 18:15:26 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:14.439 18:15:26 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:14.439 18:15:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.439 18:15:26 accel -- common/autotest_common.sh@10 -- # set +x 00:08:14.439 ************************************ 00:08:14.439 START TEST accel_crc32c 00:08:14.439 ************************************ 00:08:14.439 18:15:26 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:14.439 18:15:26 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:14.439 18:15:26 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:14.439 18:15:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.439 18:15:26 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:14.439 18:15:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.439 18:15:26 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:08:14.439 18:15:26 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:14.439 18:15:26 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:14.439 18:15:26 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:14.439 18:15:26 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:14.439 18:15:26 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:14.439 18:15:26 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:14.439 18:15:26 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:14.439 18:15:26 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:14.698 [2024-07-22 18:15:26.479069] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:14.698 [2024-07-22 18:15:26.479285] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65362 ] 00:08:14.698 [2024-07-22 18:15:26.666111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.956 [2024-07-22 18:15:26.937065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.215 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.216 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:17.747 18:15:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:17.747 00:08:17.747 real 0m2.740s 00:08:17.747 user 0m2.396s 00:08:17.747 sys 0m0.245s 00:08:17.747 18:15:29 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.747 18:15:29 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:17.747 ************************************ 00:08:17.747 END TEST accel_crc32c 00:08:17.747 ************************************ 00:08:17.747 18:15:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:17.747 18:15:29 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:08:17.747 18:15:29 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:17.747 18:15:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.747 18:15:29 accel -- common/autotest_common.sh@10 -- # set +x 00:08:17.747 ************************************ 00:08:17.747 START TEST accel_crc32c_C2 00:08:17.747 ************************************ 00:08:17.747 18:15:29 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:08:17.747 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:17.747 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:17.747 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.747 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.747 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:17.747 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:17.747 18:15:29 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:17.747 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:17.747 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:17.747 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:17.747 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.747 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:17.747 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:17.747 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:17.747 [2024-07-22 18:15:29.275259] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:17.747 [2024-07-22 18:15:29.275463] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65409 ] 00:08:17.747 [2024-07-22 18:15:29.455451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.747 [2024-07-22 18:15:29.732555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.005 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.006 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:19.906 18:15:31 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:19.906 00:08:19.906 real 0m2.625s 00:08:19.906 user 0m2.267s 00:08:19.906 sys 0m0.259s 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.906 18:15:31 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:19.906 ************************************ 00:08:19.906 END TEST accel_crc32c_C2 00:08:19.906 ************************************ 00:08:19.906 18:15:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:19.906 18:15:31 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:19.906 18:15:31 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:19.906 18:15:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.906 18:15:31 accel -- common/autotest_common.sh@10 -- # set +x 00:08:19.906 ************************************ 00:08:19.906 START TEST accel_copy 00:08:19.906 ************************************ 00:08:19.906 18:15:31 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:08:19.906 18:15:31 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:19.906 18:15:31 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:08:19.906 18:15:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:19.906 18:15:31 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:19.906 18:15:31 accel.accel_copy 
-- accel/accel.sh@19 -- # read -r var val 00:08:19.906 18:15:31 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:19.906 18:15:31 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:19.906 18:15:31 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:19.906 18:15:31 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:19.906 18:15:31 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:19.906 18:15:31 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:19.906 18:15:31 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:19.906 18:15:31 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:19.906 18:15:31 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:08:20.164 [2024-07-22 18:15:31.947665] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:20.164 [2024-07-22 18:15:31.947839] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65461 ] 00:08:20.164 [2024-07-22 18:15:32.114434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.424 [2024-07-22 18:15:32.363548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.682 18:15:32 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:20.682 18:15:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.683 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.683 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.683 18:15:32 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:08:20.683 18:15:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.683 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.683 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.683 18:15:32 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:20.683 18:15:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.683 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.683 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.683 18:15:32 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:08:20.683 18:15:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.683 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.683 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.683 18:15:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:20.683 18:15:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.683 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.683 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.683 18:15:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:20.683 18:15:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.683 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.683 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@19 -- # read 
-r var val 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:08:22.581 18:15:34 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:22.581 00:08:22.581 real 0m2.569s 00:08:22.581 user 0m2.283s 00:08:22.581 sys 0m0.190s 00:08:22.581 18:15:34 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.581 18:15:34 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:08:22.581 ************************************ 00:08:22.581 END TEST accel_copy 00:08:22.581 ************************************ 00:08:22.581 18:15:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:22.581 18:15:34 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:22.581 18:15:34 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:22.581 18:15:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.581 18:15:34 accel -- common/autotest_common.sh@10 -- # set +x 00:08:22.581 ************************************ 00:08:22.581 START TEST accel_fill 00:08:22.581 ************************************ 00:08:22.581 18:15:34 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:22.581 18:15:34 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:08:22.581 18:15:34 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:08:22.581 18:15:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:22.581 18:15:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:22.581 18:15:34 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:22.581 18:15:34 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:22.581 18:15:34 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:08:22.581 18:15:34 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:22.581 18:15:34 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:22.581 18:15:34 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:22.582 18:15:34 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:22.582 18:15:34 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:22.582 18:15:34 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:08:22.582 18:15:34 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:08:22.582 [2024-07-22 18:15:34.581181] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:22.582 [2024-07-22 18:15:34.581387] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65502 ] 00:08:22.840 [2024-07-22 18:15:34.758601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.098 [2024-07-22 18:15:34.998600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:23.357 18:15:35 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.357 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
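For reference, the trace above is the accel_fill test driving SPDK's accel_perf example with a software fill workload. A minimal standalone sketch of the equivalent command is shown below; it assumes the workspace path used in this job and omits the -c /dev/fd/62 argument through which the harness supplies its generated accel JSON configuration.
# Sketch only (assumptions noted above). Flag meanings per the accel_perf usage
# listing printed earlier in this log:
#   -t run time in seconds, -w workload type, -f fill BYTE value,
#   -q queue depth per core, -a tasks per core (defaults to -q), -y verify result.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y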
00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:08:25.330 18:15:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:25.330 00:08:25.330 real 0m2.557s 00:08:25.330 user 0m0.017s 00:08:25.330 sys 0m0.002s 00:08:25.330 ************************************ 00:08:25.330 END TEST accel_fill 00:08:25.330 ************************************ 00:08:25.330 18:15:37 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.330 18:15:37 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:08:25.330 18:15:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:25.330 18:15:37 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:25.330 18:15:37 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:25.330 18:15:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.330 18:15:37 accel -- common/autotest_common.sh@10 -- # set +x 00:08:25.330 ************************************ 00:08:25.330 START TEST accel_copy_crc32c 00:08:25.330 ************************************ 00:08:25.330 18:15:37 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:08:25.330 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:25.330 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:25.330 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:25.330 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:25.330 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:25.330 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:25.330 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:25.330 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:25.330 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:25.330 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:25.330 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:25.330 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:25.331 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:08:25.331 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:25.331 [2024-07-22 18:15:37.192628] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:25.331 [2024-07-22 18:15:37.192915] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65554 ] 00:08:25.589 [2024-07-22 18:15:37.364020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.589 [2024-07-22 18:15:37.597547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:25.848 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:25.849 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:25.849 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:25.849 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:25.849 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:25.849 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:25.849 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:25.849 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:25.849 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:25.849 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:25.849 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
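The copy_crc32c test above follows the same pattern, with the default 4 KiB transfer size and the default CRC seed of 0, which is what the val='4096 bytes' and val=0 lines in its trace record. A standalone sketch under the same assumptions as the fill example:
# copy + CRC-32C in a single operation, 1 second run, result verification on
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y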
00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:27.768 00:08:27.768 real 0m2.513s 00:08:27.768 user 0m2.224s 00:08:27.768 sys 0m0.195s 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:27.768 18:15:39 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:27.768 ************************************ 00:08:27.768 END TEST accel_copy_crc32c 00:08:27.768 ************************************ 00:08:27.768 18:15:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:27.768 18:15:39 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:27.768 18:15:39 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:27.768 18:15:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.768 18:15:39 accel -- common/autotest_common.sh@10 -- # set +x 00:08:27.768 ************************************ 00:08:27.768 START TEST accel_copy_crc32c_C2 00:08:27.768 ************************************ 00:08:27.768 18:15:39 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:27.768 18:15:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:27.768 18:15:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:27.768 18:15:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:27.768 18:15:39 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:27.768 18:15:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:27.768 18:15:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:27.768 18:15:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:27.768 18:15:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:27.768 18:15:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:27.768 18:15:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:27.768 18:15:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:27.768 18:15:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:27.768 18:15:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:27.768 18:15:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:27.768 [2024-07-22 18:15:39.758204] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:27.768 [2024-07-22 18:15:39.758415] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65595 ] 00:08:28.027 [2024-07-22 18:15:39.947976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.285 [2024-07-22 18:15:40.214240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:28.544 18:15:40 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:28.544 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:30.459 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:30.459 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.459 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:30.459 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:30.459 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:30.459 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.459 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:30.459 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:30.459 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:30.459 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.459 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:30.459 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:30.459 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:30.459 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.459 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:30.459 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:30.459 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:30.459 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.459 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:30.459 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:30.459 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:30.459 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.459 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:30.460 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:30.460 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:30.460 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:30.460 18:15:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:30.460 00:08:30.460 real 0m2.576s 00:08:30.460 user 0m2.265s 00:08:30.460 sys 0m0.216s 00:08:30.460 18:15:42 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
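The _C2 variant just completed differs only in passing -C 2, which per the usage listing above sets the I/O vector size for supported workloads; the val='8192 bytes' entry in its trace is consistent with a two-segment vector of 4 KiB buffers. Standalone sketch, same path and config assumptions as the earlier examples:
# copy_crc32c again, but split across a 2-element io vector (-C 2)
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2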
00:08:30.460 18:15:42 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:30.460 ************************************ 00:08:30.460 END TEST accel_copy_crc32c_C2 00:08:30.460 ************************************ 00:08:30.460 18:15:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:30.460 18:15:42 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:30.460 18:15:42 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:30.460 18:15:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.460 18:15:42 accel -- common/autotest_common.sh@10 -- # set +x 00:08:30.460 ************************************ 00:08:30.460 START TEST accel_dualcast 00:08:30.460 ************************************ 00:08:30.460 18:15:42 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:08:30.460 18:15:42 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:08:30.460 18:15:42 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:08:30.460 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:30.460 18:15:42 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:30.460 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:30.460 18:15:42 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:30.460 18:15:42 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:08:30.460 18:15:42 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:30.460 18:15:42 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:30.460 18:15:42 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:30.460 18:15:42 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:30.460 18:15:42 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:30.460 18:15:42 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:08:30.460 18:15:42 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:08:30.460 [2024-07-22 18:15:42.378912] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
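Note on the trace pattern above: each accel case is driven through run_test/accel_test, which executes /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y and then parses the tool's configuration dump. The long runs of "IFS=:", "read -r var val" and "case \"$var\" in" records are that parsing loop; a minimal sketch of its shape, inferred from the trace (the exact patterns in accel/accel.sh may differ), is:

  while IFS=: read -r var val; do
    case "$var" in
      *Module*)   accel_module=${val# } ;;   # strip the space after the colon; ends up as "software" here
      *Workload*) accel_opc=${val# } ;;      # ends up as "dualcast" here
    esac
  done < <(/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y)

The values read back ('4096 bytes', 32, 1, '1 seconds', Yes) appear to be the buffer size, queue/thread settings, run time and verify flag that accel_perf reports.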
00:08:30.460 [2024-07-22 18:15:42.379060] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65647 ] 00:08:30.718 [2024-07-22 18:15:42.545161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.976 [2024-07-22 18:15:42.780930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:08:30.976 18:15:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:30.977 18:15:42 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:30.977 18:15:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:32.877 18:15:44 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:32.877 00:08:32.877 real 0m2.536s 00:08:32.877 user 0m2.241s 00:08:32.877 sys 0m0.200s 00:08:32.877 18:15:44 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:32.877 18:15:44 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:08:32.877 ************************************ 00:08:32.877 END TEST accel_dualcast 00:08:32.877 ************************************ 00:08:33.209 18:15:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:33.209 18:15:44 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:33.209 18:15:44 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:33.209 18:15:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.209 18:15:44 accel -- common/autotest_common.sh@10 -- # set +x 00:08:33.209 ************************************ 00:08:33.209 START TEST accel_compare 00:08:33.209 ************************************ 00:08:33.209 18:15:44 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:08:33.209 18:15:44 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:08:33.209 18:15:44 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:08:33.209 18:15:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:33.209 18:15:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:33.209 18:15:44 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:33.209 18:15:44 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:33.209 18:15:44 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:08:33.209 18:15:44 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:33.209 18:15:44 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:33.209 18:15:44 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.209 18:15:44 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.209 18:15:44 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:33.209 18:15:44 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:08:33.209 18:15:44 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:08:33.209 [2024-07-22 18:15:44.965397] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
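The compare case starting above reuses the same harness with -w compare; the final checks in each case ([[ -n software ]], [[ -n compare ]], [[ software == software ]]) only confirm that a module name and opcode were parsed out of accel_perf's output. To rerun just this workload outside the harness, assuming the same build layout as this job (and the hugepage/root environment the CI VM already provides), something like:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y

should suffice; the -c /dev/fd/62 argument in the trace appears to pass in the JSON accel configuration assembled by build_accel_config, which is empty for these runs and can likely be omitted standalone.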
00:08:33.209 [2024-07-22 18:15:44.965536] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65694 ] 00:08:33.209 [2024-07-22 18:15:45.129706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.488 [2024-07-22 18:15:45.366047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:33.747 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:35.650 18:15:47 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:35.650 00:08:35.650 real 0m2.541s 00:08:35.650 user 0m2.259s 00:08:35.650 sys 0m0.184s 00:08:35.650 18:15:47 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.650 18:15:47 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:35.650 ************************************ 00:08:35.650 END TEST accel_compare 00:08:35.650 ************************************ 00:08:35.650 18:15:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:35.650 18:15:47 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:35.650 18:15:47 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:35.650 18:15:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.650 18:15:47 accel -- common/autotest_common.sh@10 -- # set +x 00:08:35.650 ************************************ 00:08:35.650 START TEST accel_xor 00:08:35.650 ************************************ 00:08:35.650 18:15:47 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:08:35.650 18:15:47 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:35.650 18:15:47 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:35.650 18:15:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:35.650 18:15:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:35.650 18:15:47 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:35.650 18:15:47 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:35.650 18:15:47 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:35.650 18:15:47 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:35.650 18:15:47 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:35.650 18:15:47 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:35.650 18:15:47 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:35.650 18:15:47 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:35.650 18:15:47 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:35.650 18:15:47 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:35.650 [2024-07-22 18:15:47.562919] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
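The first accel_xor case above runs with the harness defaults, and the parsing loop picks up a source count of 2 (the 'val=2' record), presumably two source buffers xor'ed into a single destination. Under the same path assumptions as the note above, the standalone equivalent is:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y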
00:08:35.650 [2024-07-22 18:15:47.563106] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65740 ] 00:08:35.909 [2024-07-22 18:15:47.742121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.167 [2024-07-22 18:15:47.979643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.167 18:15:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:36.167 18:15:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.167 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.167 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.167 18:15:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:36.167 18:15:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.167 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.167 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.426 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:38.329 18:15:50 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:38.329 00:08:38.329 real 0m2.581s 00:08:38.329 user 0m2.273s 00:08:38.329 sys 0m0.213s 00:08:38.329 18:15:50 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:38.329 18:15:50 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:38.329 ************************************ 00:08:38.329 END TEST accel_xor 00:08:38.329 ************************************ 00:08:38.329 18:15:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:38.329 18:15:50 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:38.329 18:15:50 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:38.329 18:15:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.329 18:15:50 accel -- common/autotest_common.sh@10 -- # set +x 00:08:38.329 ************************************ 00:08:38.329 START TEST accel_xor 00:08:38.329 ************************************ 00:08:38.329 18:15:50 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:38.329 18:15:50 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:38.329 [2024-07-22 18:15:50.198080] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
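This second accel_xor case repeats the workload with -x 3 (visible in the run_test line above), and the configuration read back shows 'val=3' where the previous case showed 2 — presumably three source buffers per xor operation. The standalone equivalent, with the same path assumptions:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3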
00:08:38.329 [2024-07-22 18:15:50.198249] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65787 ] 00:08:38.588 [2024-07-22 18:15:50.374148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.847 [2024-07-22 18:15:50.608588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:38.847 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:40.774 18:15:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:40.774 18:15:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:40.774 18:15:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:40.774 18:15:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:40.774 18:15:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:40.774 18:15:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:40.774 18:15:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:40.774 18:15:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:40.774 18:15:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:40.774 18:15:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:40.774 18:15:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:40.774 18:15:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:40.774 18:15:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:40.774 18:15:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:40.774 18:15:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:40.774 18:15:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:40.774 18:15:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:40.774 18:15:52 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:08:40.774 18:15:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:40.774 18:15:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:40.774 18:15:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:40.774 18:15:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:40.774 18:15:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:40.774 18:15:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:40.774 18:15:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:40.774 18:15:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:40.774 ************************************ 00:08:40.774 END TEST accel_xor 00:08:40.774 ************************************ 00:08:40.774 18:15:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:40.774 00:08:40.774 real 0m2.538s 00:08:40.774 user 0m2.237s 00:08:40.774 sys 0m0.207s 00:08:40.774 18:15:52 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:40.774 18:15:52 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:40.774 18:15:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:40.774 18:15:52 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:40.774 18:15:52 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:40.774 18:15:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.774 18:15:52 accel -- common/autotest_common.sh@10 -- # set +x 00:08:40.774 ************************************ 00:08:40.774 START TEST accel_dif_verify 00:08:40.774 ************************************ 00:08:40.774 18:15:52 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:08:40.774 18:15:52 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:40.774 18:15:52 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:40.774 18:15:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:40.774 18:15:52 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:40.774 18:15:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:40.774 18:15:52 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:40.774 18:15:52 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:40.774 18:15:52 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:40.774 18:15:52 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:40.774 18:15:52 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:40.774 18:15:52 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:40.774 18:15:52 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:40.774 18:15:52 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:40.775 18:15:52 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:40.775 [2024-07-22 18:15:52.776450] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
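The dif_verify case adds block-format parameters to the mix: besides the 4096-byte buffers, the parsing loop reads a '512 bytes' and an '8 bytes' value, consistent with DIF's per-block protection-information layout (an interpretation; the trace only shows the raw values). Note there is no -y for this workload. Standalone, under the same assumptions as the earlier notes:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify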
00:08:40.775 [2024-07-22 18:15:52.776619] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65832 ] 00:08:41.033 [2024-07-22 18:15:52.956512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.291 [2024-07-22 18:15:53.283431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:41.549 18:15:53 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:41.549 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:44.084 18:15:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:44.084 18:15:55 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:44.084 18:15:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:44.084 18:15:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:44.084 18:15:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:44.084 18:15:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:44.084 18:15:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:44.084 18:15:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:44.084 18:15:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:44.084 18:15:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:44.084 18:15:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:44.084 18:15:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:44.084 18:15:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:44.084 18:15:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:44.084 18:15:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:44.084 18:15:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:44.084 18:15:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:44.084 18:15:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:44.084 18:15:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:44.084 18:15:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:44.084 18:15:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:44.084 18:15:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:44.084 18:15:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:44.084 18:15:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:44.084 18:15:55 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:44.084 18:15:55 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:44.084 18:15:55 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:44.084 00:08:44.084 real 0m2.797s 00:08:44.084 user 0m2.454s 00:08:44.084 sys 0m0.242s 00:08:44.084 18:15:55 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.084 18:15:55 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:44.084 ************************************ 00:08:44.084 END TEST accel_dif_verify 00:08:44.084 ************************************ 00:08:44.084 18:15:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:44.084 18:15:55 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:44.084 18:15:55 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:44.084 18:15:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.084 18:15:55 accel -- common/autotest_common.sh@10 -- # set +x 00:08:44.084 ************************************ 00:08:44.084 START TEST accel_dif_generate 00:08:44.084 ************************************ 00:08:44.084 18:15:55 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:08:44.084 18:15:55 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:44.084 18:15:55 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:44.084 18:15:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:44.084 18:15:55 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:44.084 18:15:55 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:44.084 18:15:55 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:44.084 18:15:55 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:44.084 18:15:55 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:44.084 18:15:55 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:44.084 18:15:55 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:44.084 18:15:55 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:44.084 18:15:55 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:44.084 18:15:55 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:44.084 18:15:55 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:44.084 [2024-07-22 18:15:55.641927] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:44.084 [2024-07-22 18:15:55.642139] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65881 ] 00:08:44.084 [2024-07-22 18:15:55.819193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.343 [2024-07-22 18:15:56.116249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:44.343 18:15:56 
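accel_dif_generate, started above, exercises the generation side of the same DIF path with an identical parameter set to dif_verify ('4096 bytes', '512 bytes', '8 bytes', verify=No). Its standalone equivalent, under the same path assumptions:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate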
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:08:44.343 18:15:56 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:44.343 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:46.244 18:15:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:46.245 18:15:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:46.245 18:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:46.245 18:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:46.245 18:15:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:46.245 18:15:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:46.245 18:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:46.245 18:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:46.245 18:15:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:46.245 18:15:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:46.245 18:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:46.245 18:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:46.245 18:15:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:46.245 18:15:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:46.245 18:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:46.245 18:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:46.245 18:15:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:46.245 18:15:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:46.245 18:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:46.245 18:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:46.245 18:15:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:46.245 18:15:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:46.245 18:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:46.245 18:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:46.504 18:15:58 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:46.504 18:15:58 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:46.504 18:15:58 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:46.504 00:08:46.504 real 0m2.697s 
00:08:46.504 user 0m2.384s 00:08:46.504 sys 0m0.212s 00:08:46.504 18:15:58 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:46.504 18:15:58 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:46.504 ************************************ 00:08:46.504 END TEST accel_dif_generate 00:08:46.504 ************************************ 00:08:46.504 18:15:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:46.504 18:15:58 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:46.504 18:15:58 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:46.504 18:15:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:46.504 18:15:58 accel -- common/autotest_common.sh@10 -- # set +x 00:08:46.504 ************************************ 00:08:46.504 START TEST accel_dif_generate_copy 00:08:46.504 ************************************ 00:08:46.504 18:15:58 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:08:46.504 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:46.504 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:46.504 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:46.504 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:46.504 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:46.504 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:46.504 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:46.504 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:46.504 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:46.504 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:46.504 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:46.504 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:46.504 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:46.504 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:46.504 [2024-07-22 18:15:58.390888] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:46.504 [2024-07-22 18:15:58.391090] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65933 ] 00:08:46.762 [2024-07-22 18:15:58.569945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.020 [2024-07-22 18:15:58.861422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.278 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:47.278 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:47.278 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:47.279 18:15:59 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:47.279 18:15:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:49.231 00:08:49.231 real 0m2.625s 00:08:49.231 user 0m2.315s 00:08:49.231 sys 0m0.216s 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:49.231 ************************************ 00:08:49.231 END TEST accel_dif_generate_copy 00:08:49.231 ************************************ 00:08:49.231 18:16:00 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:49.231 18:16:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:49.231 18:16:01 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:49.231 18:16:01 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:49.231 18:16:01 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:49.231 18:16:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.231 18:16:01 accel -- common/autotest_common.sh@10 -- # set +x 00:08:49.231 ************************************ 00:08:49.231 START TEST accel_comp 00:08:49.231 ************************************ 00:08:49.231 18:16:01 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:49.231 18:16:01 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:49.231 18:16:01 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:49.231 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.231 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.231 18:16:01 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:49.231 18:16:01 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:49.231 18:16:01 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:49.231 18:16:01 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:49.231 18:16:01 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:49.231 18:16:01 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:49.231 18:16:01 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:49.231 18:16:01 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:49.231 18:16:01 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:49.231 18:16:01 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:49.231 [2024-07-22 18:16:01.071868] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:49.231 [2024-07-22 18:16:01.072045] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65974 ] 00:08:49.490 [2024-07-22 18:16:01.252740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.490 [2024-07-22 18:16:01.500970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.748 18:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:49.748 18:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.748 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.748 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.748 18:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:49.748 18:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.748 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.748 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.748 18:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:49.748 18:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.748 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.748 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.748 18:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:49.748 18:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.749 18:16:01 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.749 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:52.279 18:16:03 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:52.279 00:08:52.279 real 0m2.716s 00:08:52.279 user 0m2.412s 00:08:52.279 sys 0m0.201s 00:08:52.279 18:16:03 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:52.279 18:16:03 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:52.279 ************************************ 00:08:52.279 END TEST accel_comp 00:08:52.279 ************************************ 00:08:52.279 18:16:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:52.279 18:16:03 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:52.279 18:16:03 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:52.279 18:16:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.279 18:16:03 accel -- common/autotest_common.sh@10 -- # set +x 00:08:52.279 ************************************ 00:08:52.279 START TEST accel_decomp 00:08:52.279 ************************************ 00:08:52.279 18:16:03 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:52.279 18:16:03 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:52.279 18:16:03 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:52.279 18:16:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.279 18:16:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.279 18:16:03 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:52.279 18:16:03 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:52.279 18:16:03 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:52.279 18:16:03 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:52.279 18:16:03 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:52.279 18:16:03 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:52.279 18:16:03 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:52.279 18:16:03 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:52.279 18:16:03 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:52.279 18:16:03 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:52.279 [2024-07-22 18:16:03.841585] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:52.279 [2024-07-22 18:16:03.841753] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66026 ] 00:08:52.280 [2024-07-22 18:16:04.016896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.538 [2024-07-22 18:16:04.342412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.796 18:16:04 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:08:52.796 18:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.797 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:54.698 18:16:06 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:54.698 00:08:54.698 real 0m2.788s 00:08:54.698 user 0m2.448s 00:08:54.698 sys 0m0.238s 00:08:54.698 18:16:06 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:54.698 ************************************ 00:08:54.698 END TEST accel_decomp 00:08:54.698 18:16:06 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:54.698 ************************************ 00:08:54.698 18:16:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:54.698 18:16:06 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:54.698 18:16:06 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:54.698 18:16:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.698 18:16:06 accel -- common/autotest_common.sh@10 -- # set +x 00:08:54.698 ************************************ 00:08:54.698 START TEST accel_decomp_full 00:08:54.698 ************************************ 00:08:54.698 18:16:06 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:54.698 18:16:06 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:54.698 18:16:06 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:54.698 18:16:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:54.698 18:16:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:54.698 18:16:06 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:54.698 18:16:06 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:54.698 18:16:06 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:54.698 18:16:06 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:54.698 18:16:06 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:54.698 18:16:06 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:54.698 18:16:06 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:54.698 18:16:06 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:54.698 18:16:06 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:54.698 18:16:06 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:54.698 [2024-07-22 18:16:06.681908] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:54.699 [2024-07-22 18:16:06.682099] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66077 ] 00:08:54.957 [2024-07-22 18:16:06.849177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.216 [2024-07-22 18:16:07.126050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.475 18:16:07 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.475 18:16:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.476 18:16:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:58.005 18:16:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:58.005 18:16:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:58.005 18:16:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:58.005 18:16:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:58.005 18:16:09 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:58.005 18:16:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:58.005 18:16:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:58.005 18:16:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:58.005 18:16:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:58.005 18:16:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:58.005 18:16:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:58.005 18:16:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:58.005 18:16:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:58.005 18:16:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:58.005 18:16:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:58.005 18:16:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:58.005 18:16:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:58.005 18:16:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:58.005 18:16:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:58.005 18:16:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:58.005 18:16:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:58.005 18:16:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:58.005 18:16:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:58.005 18:16:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:58.005 18:16:09 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:58.005 18:16:09 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:58.005 18:16:09 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:58.005 00:08:58.005 real 0m2.832s 00:08:58.005 user 0m2.485s 00:08:58.005 sys 0m0.244s 00:08:58.005 18:16:09 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.005 18:16:09 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:58.005 ************************************ 00:08:58.005 END TEST accel_decomp_full 00:08:58.005 ************************************ 00:08:58.005 18:16:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:58.005 18:16:09 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:58.005 18:16:09 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:58.005 18:16:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.005 18:16:09 accel -- common/autotest_common.sh@10 -- # set +x 00:08:58.005 ************************************ 00:08:58.005 START TEST accel_decomp_mcore 00:08:58.005 ************************************ 00:08:58.005 18:16:09 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:58.005 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:58.005 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:58.005 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.005 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.005 18:16:09 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:58.005 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:58.005 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:58.005 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:58.005 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:58.005 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:58.005 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:58.005 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:58.005 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:58.005 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:58.005 [2024-07-22 18:16:09.563823] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:58.005 [2024-07-22 18:16:09.564006] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66125 ] 00:08:58.005 [2024-07-22 18:16:09.731748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:58.005 [2024-07-22 18:16:10.015968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.005 [2024-07-22 18:16:10.016090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:58.005 [2024-07-22 18:16:10.016293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.005 [2024-07-22 18:16:10.016299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.264 18:16:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.793 18:16:12 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:00.793 00:09:00.793 real 0m2.745s 00:09:00.793 user 0m0.016s 00:09:00.793 sys 0m0.004s 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:00.793 18:16:12 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:09:00.793 ************************************ 00:09:00.793 END TEST accel_decomp_mcore 00:09:00.793 ************************************ 00:09:00.793 18:16:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:00.793 18:16:12 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:00.793 18:16:12 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:00.793 18:16:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.793 18:16:12 accel -- common/autotest_common.sh@10 -- # set +x 00:09:00.793 ************************************ 00:09:00.793 START TEST accel_decomp_full_mcore 00:09:00.793 ************************************ 00:09:00.794 18:16:12 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:00.794 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:09:00.794 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:09:00.794 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.794 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.794 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:00.794 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:00.794 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:09:00.794 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:00.794 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:00.794 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:00.794 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:00.794 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:00.794 18:16:12 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:09:00.794 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:09:00.794 [2024-07-22 18:16:12.366811] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:00.794 [2024-07-22 18:16:12.367009] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66174 ] 00:09:00.794 [2024-07-22 18:16:12.546409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:01.052 [2024-07-22 18:16:12.817770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.052 [2024-07-22 18:16:12.817995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.052 [2024-07-22 18:16:12.818064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.052 [2024-07-22 18:16:12.818072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:01.052 18:16:13 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.052 18:16:13 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.052 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.053 18:16:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:02.953 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:02.953 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:02.953 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:02.953 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:02.953 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:02.953 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:02.953 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:02.953 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:02.953 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:02.953 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:02.953 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:02.953 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:03.212 18:16:14 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:03.212 00:09:03.212 real 0m2.682s 00:09:03.212 user 0m0.019s 00:09:03.212 sys 0m0.005s 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:03.212 ************************************ 00:09:03.212 END TEST accel_decomp_full_mcore 00:09:03.212 ************************************ 00:09:03.212 18:16:14 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:09:03.212 18:16:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:03.212 18:16:15 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:03.212 18:16:15 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:09:03.212 18:16:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.212 18:16:15 accel -- common/autotest_common.sh@10 -- # set +x 00:09:03.212 ************************************ 00:09:03.212 START TEST accel_decomp_mthread 00:09:03.212 ************************************ 00:09:03.212 18:16:15 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:03.213 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:09:03.213 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:09:03.213 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.213 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.213 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:03.213 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:03.213 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:09:03.213 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:03.213 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:03.213 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:03.213 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:03.213 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:03.213 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:09:03.213 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:09:03.213 [2024-07-22 18:16:15.093138] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:03.213 [2024-07-22 18:16:15.093296] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66224 ] 00:09:03.471 [2024-07-22 18:16:15.263905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.729 [2024-07-22 18:16:15.547116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.988 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.989 18:16:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:05.893 ************************************ 00:09:05.893 END TEST accel_decomp_mthread 00:09:05.893 ************************************ 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:05.893 00:09:05.893 real 0m2.733s 00:09:05.893 user 0m2.438s 00:09:05.893 sys 0m0.198s 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:05.893 18:16:17 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:05.893 18:16:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:05.893 18:16:17 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:05.893 18:16:17 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:05.893 18:16:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.893 18:16:17 accel -- common/autotest_common.sh@10 -- # set +x 00:09:05.893 ************************************ 00:09:05.893 START 
TEST accel_decomp_full_mthread 00:09:05.893 ************************************ 00:09:05.893 18:16:17 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:05.893 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:09:05.893 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:09:05.893 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:05.893 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:05.893 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:05.893 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:05.893 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:09:05.893 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:05.893 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:05.893 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:05.893 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:05.893 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:05.893 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:09:05.893 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:09:05.893 [2024-07-22 18:16:17.890794] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:05.893 [2024-07-22 18:16:17.891059] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66276 ] 00:09:06.151 [2024-07-22 18:16:18.070916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.410 [2024-07-22 18:16:18.350926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.669 18:16:18 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.669 18:16:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:08.615 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:08.874 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:08.874 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:08.874 18:16:20 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:08.874 00:09:08.874 real 0m2.804s 00:09:08.874 user 0m2.462s 00:09:08.874 sys 0m0.242s 00:09:08.874 18:16:20 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:08.874 18:16:20 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:08.874 ************************************ 00:09:08.874 END TEST accel_decomp_full_mthread 00:09:08.874 ************************************ 
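The decompress variants in this stretch of the log (accel_decomp_full, whose timing closes out above, then accel_decomp_mcore, accel_decomp_full_mcore, accel_decomp_mthread and accel_decomp_full_mthread) all drive the same accel_perf example binary against the compressed fixture test/accel/bib; only the flags passed through accel_test change. The sketch below consolidates the command lines visible in the trace, minus the -c /dev/fd/62 JSON config that build_accel_config normally supplies; the per-flag comments are inferences drawn from what the trace itself reports (reactor counts, buffer sizes), not additional output from this run.

  PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
  # Common flags: -t 1 (run for 1 second), -w decompress (workload), -l (compressed input), -y (verify result)
  "$PERF" -t 1 -w decompress -l "$BIB" -y -m 0xf        # mcore: mask 0xf surfaces as EAL "-c 0xf", 4 reactors on cores 0-3
  "$PERF" -t 1 -w decompress -l "$BIB" -y -o 0 -m 0xf   # full_mcore: with -o 0 the trace records '111250 bytes' instead of the default '4096 bytes'
  "$PERF" -t 1 -w decompress -l "$BIB" -y -T 2          # mthread: 2 worker threads on the single default core (mask 0x1, one reactor)
  "$PERF" -t 1 -w decompress -l "$BIB" -y -o 0 -T 2     # full_mthread: full-size buffers plus the 2 threads

Each run is capped at 1 second of I/O by -t 1, which is consistent with the 'real' figures of roughly 2.7 to 2.8 seconds per test once application startup and teardown are included.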
00:09:08.874 18:16:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:08.874 18:16:20 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:09:08.874 18:16:20 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:08.874 18:16:20 accel -- accel/accel.sh@137 -- # build_accel_config 00:09:08.874 18:16:20 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:08.874 18:16:20 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:08.874 18:16:20 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:08.874 18:16:20 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:08.874 18:16:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.874 18:16:20 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:08.874 18:16:20 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:08.874 18:16:20 accel -- accel/accel.sh@40 -- # local IFS=, 00:09:08.874 18:16:20 accel -- accel/accel.sh@41 -- # jq -r . 00:09:08.874 18:16:20 accel -- common/autotest_common.sh@10 -- # set +x 00:09:08.874 ************************************ 00:09:08.874 START TEST accel_dif_functional_tests 00:09:08.874 ************************************ 00:09:08.874 18:16:20 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:08.874 [2024-07-22 18:16:20.806880] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:08.874 [2024-07-22 18:16:20.807313] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66329 ] 00:09:09.133 [2024-07-22 18:16:20.984098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:09.391 [2024-07-22 18:16:21.262766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.391 [2024-07-22 18:16:21.262959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.391 [2024-07-22 18:16:21.263259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.649 00:09:09.649 00:09:09.649 CUnit - A unit testing framework for C - Version 2.1-3 00:09:09.649 http://cunit.sourceforge.net/ 00:09:09.649 00:09:09.649 00:09:09.649 Suite: accel_dif 00:09:09.649 Test: verify: DIF generated, GUARD check ...passed 00:09:09.649 Test: verify: DIF generated, APPTAG check ...passed 00:09:09.649 Test: verify: DIF generated, REFTAG check ...passed 00:09:09.649 Test: verify: DIF not generated, GUARD check ...passed 00:09:09.649 Test: verify: DIF not generated, APPTAG check ...[2024-07-22 18:16:21.625326] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:09.649 [2024-07-22 18:16:21.625479] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:09.649 passed 00:09:09.649 Test: verify: DIF not generated, REFTAG check ...passed 00:09:09.649 Test: verify: APPTAG correct, APPTAG check ...passed 00:09:09.649 Test: verify: APPTAG incorrect, APPTAG check ...passed[2024-07-22 18:16:21.625645] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:09.649 [2024-07-22 18:16:21.625780] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:09:09.649 00:09:09.649 Test: verify: APPTAG incorrect, no 
APPTAG check ...passed 00:09:09.649 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:09:09.649 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:09:09.649 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-22 18:16:21.626335] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:09:09.649 passed 00:09:09.649 Test: verify copy: DIF generated, GUARD check ...passed 00:09:09.649 Test: verify copy: DIF generated, APPTAG check ...passed 00:09:09.649 Test: verify copy: DIF generated, REFTAG check ...passed 00:09:09.649 Test: verify copy: DIF not generated, GUARD check ...passed 00:09:09.649 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-22 18:16:21.627028] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:09.649 passed 00:09:09.649 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-22 18:16:21.627151] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:09.649 [2024-07-22 18:16:21.627217] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:09.649 passed 00:09:09.649 Test: generate copy: DIF generated, GUARD check ...passed 00:09:09.649 Test: generate copy: DIF generated, APTTAG check ...passed 00:09:09.649 Test: generate copy: DIF generated, REFTAG check ...passed 00:09:09.649 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:09:09.649 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:09:09.649 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:09:09.649 Test: generate copy: iovecs-len validate ...passed 00:09:09.649 Test: generate copy: buffer alignment validate ...[2024-07-22 18:16:21.628107] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:09:09.649 passed 00:09:09.649 00:09:09.649 Run Summary: Type Total Ran Passed Failed Inactive 00:09:09.649 suites 1 1 n/a 0 0 00:09:09.649 tests 26 26 26 0 0 00:09:09.649 asserts 115 115 115 0 n/a 00:09:09.649 00:09:09.649 Elapsed time = 0.007 seconds 00:09:11.070 ************************************ 00:09:11.070 END TEST accel_dif_functional_tests 00:09:11.070 ************************************ 00:09:11.070 00:09:11.070 real 0m2.333s 00:09:11.070 user 0m4.435s 00:09:11.070 sys 0m0.335s 00:09:11.070 18:16:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:11.070 18:16:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:09:11.070 18:16:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:11.070 ************************************ 00:09:11.070 END TEST accel 00:09:11.070 00:09:11.070 real 1m4.707s 00:09:11.070 user 1m9.233s 00:09:11.070 sys 0m6.855s 00:09:11.070 18:16:23 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:11.070 18:16:23 accel -- common/autotest_common.sh@10 -- # set +x 00:09:11.070 ************************************ 00:09:11.328 18:16:23 -- common/autotest_common.sh@1142 -- # return 0 00:09:11.328 18:16:23 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:11.328 18:16:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:11.328 18:16:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.328 18:16:23 -- common/autotest_common.sh@10 -- # set +x 00:09:11.328 ************************************ 00:09:11.328 START TEST accel_rpc 00:09:11.328 ************************************ 00:09:11.328 18:16:23 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:11.328 * Looking for test storage... 00:09:11.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:11.328 18:16:23 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:11.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.328 18:16:23 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=66411 00:09:11.328 18:16:23 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 66411 00:09:11.328 18:16:23 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:09:11.328 18:16:23 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 66411 ']' 00:09:11.328 18:16:23 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.328 18:16:23 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:11.328 18:16:23 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.328 18:16:23 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:11.328 18:16:23 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.587 [2024-07-22 18:16:23.361195] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
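The accel_dif_functional_tests block that just finished is a CUnit suite (test/accel/dif/dif), not an accel_perf run. The dif.c *ERROR* lines interleaved with the 'passed' verdicts are the negative-path cases doing their job: a case named "DIF not generated, GUARD check" is expected to provoke a compare failure and then assert on it, so those errors do not indicate a regression. The authoritative result is the Run Summary, 26 of 26 tests and 115 of 115 asserts passed with 0 failures. For reference, the harness call is the one already traced above; run_test (from common/autotest_common.sh) prints the START/END banners and the timing, and the JSON accel config reaches the binary on /dev/fd/62, so whether the binary does anything useful without that config is not something this log establishes.

  # Verbatim from the trace; only the surrounding banner/timing plumbing is omitted
  run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62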
00:09:11.587 [2024-07-22 18:16:23.361671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66411 ] 00:09:11.587 [2024-07-22 18:16:23.542472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.847 [2024-07-22 18:16:23.825085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.414 18:16:24 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:12.414 18:16:24 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:12.414 18:16:24 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:09:12.414 18:16:24 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:09:12.414 18:16:24 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:09:12.414 18:16:24 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:09:12.414 18:16:24 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:09:12.414 18:16:24 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:12.414 18:16:24 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.414 18:16:24 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.414 ************************************ 00:09:12.414 START TEST accel_assign_opcode 00:09:12.414 ************************************ 00:09:12.676 18:16:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:09:12.676 18:16:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:09:12.676 18:16:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.676 18:16:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:12.676 [2024-07-22 18:16:24.442477] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:09:12.676 18:16:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.676 18:16:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:09:12.676 18:16:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.676 18:16:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:12.676 [2024-07-22 18:16:24.450417] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:09:12.676 18:16:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.676 18:16:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:09:12.676 18:16:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.676 18:16:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:13.617 18:16:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.617 18:16:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:09:13.617 18:16:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.617 18:16:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:13.617 18:16:25 accel_rpc.accel_assign_opcode 
-- accel/accel_rpc.sh@42 -- # jq -r .copy 00:09:13.617 18:16:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:09:13.617 18:16:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.617 software 00:09:13.617 ************************************ 00:09:13.617 END TEST accel_assign_opcode 00:09:13.617 ************************************ 00:09:13.617 00:09:13.617 real 0m0.958s 00:09:13.617 user 0m0.054s 00:09:13.617 sys 0m0.012s 00:09:13.617 18:16:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:13.617 18:16:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:13.617 18:16:25 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:13.617 18:16:25 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 66411 00:09:13.617 18:16:25 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 66411 ']' 00:09:13.617 18:16:25 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 66411 00:09:13.617 18:16:25 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:09:13.617 18:16:25 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:13.617 18:16:25 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66411 00:09:13.618 killing process with pid 66411 00:09:13.618 18:16:25 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:13.618 18:16:25 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:13.618 18:16:25 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66411' 00:09:13.618 18:16:25 accel_rpc -- common/autotest_common.sh@967 -- # kill 66411 00:09:13.618 18:16:25 accel_rpc -- common/autotest_common.sh@972 -- # wait 66411 00:09:16.150 00:09:16.150 real 0m4.948s 00:09:16.150 user 0m4.893s 00:09:16.150 sys 0m0.749s 00:09:16.150 18:16:28 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:16.150 18:16:28 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.150 ************************************ 00:09:16.150 END TEST accel_rpc 00:09:16.150 ************************************ 00:09:16.150 18:16:28 -- common/autotest_common.sh@1142 -- # return 0 00:09:16.150 18:16:28 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:16.150 18:16:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:16.150 18:16:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:16.150 18:16:28 -- common/autotest_common.sh@10 -- # set +x 00:09:16.150 ************************************ 00:09:16.150 START TEST app_cmdline 00:09:16.150 ************************************ 00:09:16.150 18:16:28 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:16.408 * Looking for test storage... 
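The accel_rpc suite above exercises opcode assignment purely over JSON-RPC: bind the copy opcode to a module before framework_start_init, then read the assignment back. A minimal standalone sketch of that flow, assuming the target's default /var/tmp/spdk.sock socket and using only the RPC names that appear in the trace (rpc_cmd in the log is a wrapper around scripts/rpc.py):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # bind the 'copy' opcode to the software module while the accel framework is still uninitialized
  $rpc -s /var/tmp/spdk.sock accel_assign_opc -o copy -m software
  # finish subsystem initialization so the assignment takes effect
  $rpc -s /var/tmp/spdk.sock framework_start_init
  # confirm the copy opcode now reports the software module
  $rpc -s /var/tmp/spdk.sock accel_get_opc_assignments | jq -r .copy | grep software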
00:09:16.408 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:16.408 18:16:28 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:16.408 18:16:28 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=66557 00:09:16.408 18:16:28 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 66557 00:09:16.408 18:16:28 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:16.408 18:16:28 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 66557 ']' 00:09:16.408 18:16:28 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.408 18:16:28 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:16.408 18:16:28 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.408 18:16:28 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:16.408 18:16:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:16.408 [2024-07-22 18:16:28.356786] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:16.408 [2024-07-22 18:16:28.358876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66557 ] 00:09:16.666 [2024-07-22 18:16:28.552651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.924 [2024-07-22 18:16:28.854110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.860 18:16:29 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:17.860 18:16:29 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:09:17.860 18:16:29 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:18.117 { 00:09:18.117 "fields": { 00:09:18.117 "commit": "f7b31b2b9", 00:09:18.117 "major": 24, 00:09:18.117 "minor": 9, 00:09:18.117 "patch": 0, 00:09:18.117 "suffix": "-pre" 00:09:18.117 }, 00:09:18.117 "version": "SPDK v24.09-pre git sha1 f7b31b2b9" 00:09:18.117 } 00:09:18.117 18:16:30 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:18.117 18:16:30 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:18.117 18:16:30 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:18.117 18:16:30 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:18.117 18:16:30 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:18.117 18:16:30 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:18.117 18:16:30 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.117 18:16:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:18.117 18:16:30 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:18.117 18:16:30 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.117 18:16:30 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:18.117 18:16:30 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:18.117 18:16:30 app_cmdline -- 
app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:18.117 18:16:30 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:09:18.117 18:16:30 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:18.117 18:16:30 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:18.117 18:16:30 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:18.117 18:16:30 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:18.117 18:16:30 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:18.117 18:16:30 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:18.117 18:16:30 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:18.117 18:16:30 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:18.117 18:16:30 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:18.117 18:16:30 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:18.375 2024/07/22 18:16:30 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:09:18.375 request: 00:09:18.375 { 00:09:18.375 "method": "env_dpdk_get_mem_stats", 00:09:18.375 "params": {} 00:09:18.375 } 00:09:18.375 Got JSON-RPC error response 00:09:18.375 GoRPCClient: error on JSON-RPC call 00:09:18.632 18:16:30 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:09:18.632 18:16:30 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:18.632 18:16:30 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:18.632 18:16:30 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:18.632 18:16:30 app_cmdline -- app/cmdline.sh@1 -- # killprocess 66557 00:09:18.632 18:16:30 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 66557 ']' 00:09:18.632 18:16:30 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 66557 00:09:18.632 18:16:30 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:09:18.632 18:16:30 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:18.632 18:16:30 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66557 00:09:18.633 18:16:30 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:18.633 18:16:30 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:18.633 killing process with pid 66557 00:09:18.633 18:16:30 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66557' 00:09:18.633 18:16:30 app_cmdline -- common/autotest_common.sh@967 -- # kill 66557 00:09:18.633 18:16:30 app_cmdline -- common/autotest_common.sh@972 -- # wait 66557 00:09:21.176 00:09:21.176 real 0m4.558s 00:09:21.176 user 0m4.977s 00:09:21.176 sys 0m0.705s 00:09:21.176 18:16:32 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.176 18:16:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:21.176 ************************************ 00:09:21.176 END TEST app_cmdline 00:09:21.176 
************************************ 00:09:21.176 18:16:32 -- common/autotest_common.sh@1142 -- # return 0 00:09:21.176 18:16:32 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:21.176 18:16:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:21.176 18:16:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.176 18:16:32 -- common/autotest_common.sh@10 -- # set +x 00:09:21.176 ************************************ 00:09:21.176 START TEST version 00:09:21.176 ************************************ 00:09:21.176 18:16:32 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:21.176 * Looking for test storage... 00:09:21.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:21.176 18:16:32 version -- app/version.sh@17 -- # get_header_version major 00:09:21.176 18:16:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:21.176 18:16:32 version -- app/version.sh@14 -- # cut -f2 00:09:21.176 18:16:32 version -- app/version.sh@14 -- # tr -d '"' 00:09:21.176 18:16:32 version -- app/version.sh@17 -- # major=24 00:09:21.176 18:16:32 version -- app/version.sh@18 -- # get_header_version minor 00:09:21.176 18:16:32 version -- app/version.sh@14 -- # cut -f2 00:09:21.176 18:16:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:21.176 18:16:32 version -- app/version.sh@14 -- # tr -d '"' 00:09:21.176 18:16:32 version -- app/version.sh@18 -- # minor=9 00:09:21.176 18:16:32 version -- app/version.sh@19 -- # get_header_version patch 00:09:21.176 18:16:32 version -- app/version.sh@14 -- # cut -f2 00:09:21.176 18:16:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:21.176 18:16:32 version -- app/version.sh@14 -- # tr -d '"' 00:09:21.176 18:16:32 version -- app/version.sh@19 -- # patch=0 00:09:21.176 18:16:32 version -- app/version.sh@20 -- # get_header_version suffix 00:09:21.176 18:16:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:21.176 18:16:32 version -- app/version.sh@14 -- # cut -f2 00:09:21.176 18:16:32 version -- app/version.sh@14 -- # tr -d '"' 00:09:21.176 18:16:32 version -- app/version.sh@20 -- # suffix=-pre 00:09:21.176 18:16:32 version -- app/version.sh@22 -- # version=24.9 00:09:21.176 18:16:32 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:21.176 18:16:32 version -- app/version.sh@28 -- # version=24.9rc0 00:09:21.176 18:16:32 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:21.176 18:16:32 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:21.176 18:16:32 version -- app/version.sh@30 -- # py_version=24.9rc0 00:09:21.176 18:16:32 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:09:21.176 00:09:21.176 real 0m0.150s 00:09:21.176 user 0m0.085s 00:09:21.176 sys 0m0.096s 00:09:21.176 18:16:32 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.176 18:16:32 version -- common/autotest_common.sh@10 -- # set +x 
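The version suite above recovers each field straight from include/spdk/version.h with grep/cut/tr and compares the result against the Python package. A minimal sketch of the same extraction, assuming the repository path from this run (macro names and the pipeline are copied from the trace; the tab-delimited cut relies on the header's own formatting):

  hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')
  patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  echo "${major}.${minor}${suffix}"   # 24.9-pre for the commit under test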
00:09:21.176 ************************************ 00:09:21.176 END TEST version 00:09:21.176 ************************************ 00:09:21.176 18:16:32 -- common/autotest_common.sh@1142 -- # return 0 00:09:21.176 18:16:32 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:09:21.176 18:16:32 -- spdk/autotest.sh@198 -- # uname -s 00:09:21.176 18:16:32 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:09:21.176 18:16:32 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:09:21.176 18:16:32 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:09:21.176 18:16:32 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:09:21.176 18:16:32 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:09:21.176 18:16:32 -- spdk/autotest.sh@260 -- # timing_exit lib 00:09:21.176 18:16:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:21.176 18:16:32 -- common/autotest_common.sh@10 -- # set +x 00:09:21.176 18:16:32 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:09:21.176 18:16:32 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:09:21.177 18:16:32 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:09:21.177 18:16:32 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:09:21.177 18:16:32 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:09:21.177 18:16:32 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:09:21.177 18:16:32 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:21.177 18:16:32 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:21.177 18:16:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.177 18:16:32 -- common/autotest_common.sh@10 -- # set +x 00:09:21.177 ************************************ 00:09:21.177 START TEST nvmf_tcp 00:09:21.177 ************************************ 00:09:21.177 18:16:32 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:21.177 * Looking for test storage... 00:09:21.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:21.177 18:16:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:21.177 18:16:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:21.177 18:16:33 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:21.177 18:16:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:21.177 18:16:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.177 18:16:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:21.177 ************************************ 00:09:21.177 START TEST nvmf_target_core 00:09:21.177 ************************************ 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:21.177 * Looking for test storage... 00:09:21.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.177 18:16:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:21.443 ************************************ 00:09:21.443 START TEST nvmf_abort 00:09:21.443 ************************************ 00:09:21.443 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:21.443 * Looking for test storage... 
00:09:21.443 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:21.443 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:21.443 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:21.443 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.443 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.443 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.443 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.443 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.443 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.443 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.443 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.443 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.443 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.443 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:09:21.443 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:09:21.443 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.443 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.443 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:21.443 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.443 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:21.443 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.443 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.443 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.443 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
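nvmf/common.sh, sourced above, derives the initiator identity once per run: nvme gen-hostnqn produces a uuid-based NQN and the uuid portion doubles as the host ID. A small sketch of that derivation (requires nvme-cli; the parameter expansion used to split off the uuid is an assumption, the trace only shows the resulting values):

  NVME_HOSTNQN=$(nvme gen-hostnqn)         # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}      # keep just the uuid part for --hostid (assumed derivation)
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  echo "${NVME_HOST[@]}"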
00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:21.444 Cannot find device "nvmf_init_br" 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # true 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:21.444 Cannot find device "nvmf_tgt_br" 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # true 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:21.444 Cannot find device "nvmf_tgt_br2" 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # true 00:09:21.444 18:16:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:21.444 Cannot find device "nvmf_init_br" 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # true 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:21.444 Cannot find device "nvmf_tgt_br" 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # true 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:21.444 Cannot find device "nvmf_tgt_br2" 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # true 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:21.444 Cannot find device "nvmf_br" 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # true 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:21.444 Cannot find device "nvmf_init_if" 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@161 -- # true 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:21.444 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:21.444 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:21.444 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:21.702 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:21.702 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:21.702 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:21.702 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:21.702 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:21.702 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:21.702 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:21.702 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:21.702 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:21.702 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:21.702 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:21.702 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:21.702 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:21.702 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:21.702 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:21.702 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:21.702 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:21.702 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:21.702 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:21.702 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:21.702 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:21.960 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:21.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:21.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:09:21.960 00:09:21.960 --- 10.0.0.2 ping statistics --- 00:09:21.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.960 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:09:21.960 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:21.960 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:21.960 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:09:21.960 00:09:21.960 --- 10.0.0.3 ping statistics --- 00:09:21.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.960 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:09:21.960 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:21.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:21.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:09:21.960 00:09:21.960 --- 10.0.0.1 ping statistics --- 00:09:21.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.960 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:09:21.960 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:21.960 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:09:21.960 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:21.960 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:21.960 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:21.961 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:21.961 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:21.961 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:21.961 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:21.961 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:21.961 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:21.961 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:21.961 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:21.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.961 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=66955 00:09:21.961 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 66955 00:09:21.961 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 66955 ']' 00:09:21.961 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.961 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:21.961 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:21.961 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.961 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:21.961 18:16:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:21.961 [2024-07-22 18:16:33.890361] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
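Before the target is started, nvmf_veth_init above builds the whole test topology with iproute2: one network namespace for the target, three veth pairs, a bridge joining the host-side peers, an iptables accept rule for port 4420, and ping checks in both directions. A condensed, in-order sketch of those steps using the interface names and addresses from the trace (run as root; error handling and teardown omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                  # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host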
00:09:21.961 [2024-07-22 18:16:33.890608] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.219 [2024-07-22 18:16:34.074072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:22.476 [2024-07-22 18:16:34.402697] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.476 [2024-07-22 18:16:34.402789] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.476 [2024-07-22 18:16:34.402807] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.476 [2024-07-22 18:16:34.402823] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.476 [2024-07-22 18:16:34.402859] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:22.476 [2024-07-22 18:16:34.403055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.476 [2024-07-22 18:16:34.403180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.476 [2024-07-22 18:16:34.403198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:23.041 18:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:23.041 18:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:09:23.041 18:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:23.041 18:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:23.041 18:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:23.041 18:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.041 18:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:23.041 18:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.041 18:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:23.041 [2024-07-22 18:16:34.907535] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:23.041 18:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.041 18:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:23.041 18:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.041 18:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:23.041 Malloc0 00:09:23.041 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.041 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:23.041 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.041 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:23.041 
Delay0 00:09:23.041 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.041 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:23.041 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.041 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:23.041 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.041 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:23.041 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.041 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:23.041 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.041 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:23.041 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.041 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:23.041 [2024-07-22 18:16:35.039892] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.041 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.041 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:23.041 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.041 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:23.041 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.041 18:16:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:23.299 [2024-07-22 18:16:35.293487] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:25.829 Initializing NVMe Controllers 00:09:25.829 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:25.829 controller IO queue size 128 less than required 00:09:25.829 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:25.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:25.829 Initialization complete. Launching workers. 
00:09:25.830 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 125, failed: 25942 00:09:25.830 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 26001, failed to submit 66 00:09:25.830 success 25942, unsuccess 59, failed 0 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:25.830 rmmod nvme_tcp 00:09:25.830 rmmod nvme_fabrics 00:09:25.830 rmmod nvme_keyring 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 66955 ']' 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 66955 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 66955 ']' 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 66955 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66955 00:09:25.830 killing process with pid 66955 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66955' 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@967 -- # kill 66955 00:09:25.830 18:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # wait 66955 00:09:27.205 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:27.205 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:27.205 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:27.205 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:27.205 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:27.205 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.205 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.205 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.205 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:27.205 00:09:27.206 real 0m5.835s 00:09:27.206 user 0m15.040s 00:09:27.206 sys 0m1.324s 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:27.206 ************************************ 00:09:27.206 END TEST nvmf_abort 00:09:27.206 ************************************ 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:27.206 ************************************ 00:09:27.206 START TEST nvmf_ns_hotplug_stress 00:09:27.206 ************************************ 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:27.206 * Looking for test storage... 
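The nvmf_abort run above configures the target entirely over JSON-RPC and then drives it with the bundled abort example against a deliberately slow delay bdev, so that queued I/O can still be aborted. A condensed sketch of those calls, expressed directly against scripts/rpc.py (rpc_cmd in the log wraps it; the default /var/tmp/spdk.sock socket is assumed, while every RPC name and argument is copied from the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0
  # large artificial latency on every op keeps I/O queued long enough to abort
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # flood the namespace (-q 128 outstanding I/O) for 1 second on core 0 and abort commands as they pile up
  /home/vagrant/spdk_repo/spdk/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128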
00:09:27.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:27.206 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:27.207 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:27.207 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:27.466 Cannot find device "nvmf_tgt_br" 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:27.466 Cannot find device "nvmf_tgt_br2" 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:27.466 Cannot find device "nvmf_tgt_br" 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:27.466 Cannot find device "nvmf_tgt_br2" 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:27.466 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:27.466 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:27.466 18:16:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:27.466 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:27.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:27.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:09:27.726 00:09:27.726 --- 10.0.0.2 ping statistics --- 00:09:27.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.726 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:27.726 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:27.726 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:09:27.726 00:09:27.726 --- 10.0.0.3 ping statistics --- 00:09:27.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.726 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:27.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:27.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:09:27.726 00:09:27.726 --- 10.0.0.1 ping statistics --- 00:09:27.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.726 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=67249 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 67249 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 67249 ']' 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:27.726 18:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.726 [2024-07-22 18:16:39.730940] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
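With the network verified, nvmfappstart launches nvmf_tgt inside the target namespace (the notices that follow are its startup banner: three reactors on cores 1 to 3, tracepoints enabled) and ns_hotplug_stress.sh then assembles the test subsystem over JSON-RPC: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 (allow any host, up to 10 namespaces), data and discovery listeners on 10.0.0.2:4420, and two backing bdevs, a delay-wrapped malloc bdev (Delay0) and a 1000 MB null bdev (NULL1). A condensed sketch of that bring-up, using the binaries and RPC calls as they appear in the surrounding trace, is below; the sleep is a stand-in for the waitforlisten helper and is an assumption, not part of the original script.

#!/usr/bin/env bash
# Sketch of the target bring-up traced around this point.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Start the target in the namespace: shm id 0, tracepoint mask 0xFFFF,
# core mask 0xE (three reactors, matching the banner in the log).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
sleep 3   # stand-in for waitforlisten, which polls the RPC socket

$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME -m 10
$rpc_py nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Namespace 1: a 32 MB malloc bdev wrapped in a delay bdev.
$rpc_py bdev_malloc_create 32 512 -b Malloc0
$rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc_py nvmf_subsystem_add_ns "$nqn" Delay0
# Namespace 2: a 1000 MB null bdev that the test will keep resizing.
$rpc_py bdev_null_create NULL1 1000 512
$rpc_py nvmf_subsystem_add_ns "$nqn" NULL1

Note that the actual trace passes -s SPDK00000000000001 as the serial number; the value above is the NVMF_SERIAL default from common.sh and either works for a local run.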
00:09:27.726 [2024-07-22 18:16:39.731133] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.985 [2024-07-22 18:16:39.908994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:28.244 [2024-07-22 18:16:40.201255] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:28.244 [2024-07-22 18:16:40.201337] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:28.244 [2024-07-22 18:16:40.201356] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:28.244 [2024-07-22 18:16:40.201372] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:28.244 [2024-07-22 18:16:40.201385] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:28.244 [2024-07-22 18:16:40.201675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:28.244 [2024-07-22 18:16:40.201818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.244 [2024-07-22 18:16:40.201877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:28.840 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:28.840 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:09:28.840 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:28.840 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:28.840 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.840 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.840 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:28.840 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:29.098 [2024-07-22 18:16:40.986455] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.098 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:29.357 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.616 [2024-07-22 18:16:41.608186] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.875 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:30.134 18:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:30.393 Malloc0 00:09:30.393 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:30.651 Delay0 00:09:30.651 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:30.910 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:31.169 NULL1 00:09:31.169 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:31.441 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=67392 00:09:31.441 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:31.441 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:31.441 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.701 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.960 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:31.960 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:32.219 true 00:09:32.219 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:32.219 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.477 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.736 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:32.736 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:32.995 true 00:09:32.995 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:32.995 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.253 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.512 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:33.512 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:33.770 true 00:09:33.770 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:33.770 18:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.336 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.594 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:34.594 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:34.852 true 00:09:34.852 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:34.852 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.111 18:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.369 18:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:35.370 18:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:35.628 true 00:09:35.628 18:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:35.628 18:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.263 18:16:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.550 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:36.550 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:36.550 true 00:09:36.808 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:36.808 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.066 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.324 18:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:37.325 18:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:37.584 true 00:09:37.584 18:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:37.584 18:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.843 18:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.103 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:38.103 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:38.362 true 00:09:38.362 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:38.362 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.621 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.879 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:38.880 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:39.138 true 00:09:39.138 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:39.138 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.398 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.657 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:39.657 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:39.915 true 00:09:39.915 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:39.915 18:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.502 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.760 18:16:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:40.760 18:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:41.018 true 00:09:41.018 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:41.018 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.585 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.844 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:41.844 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:42.102 true 00:09:42.102 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:42.102 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.361 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.621 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:42.621 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:42.934 true 00:09:42.934 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:42.934 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.193 18:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.452 18:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:43.452 18:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:43.710 true 00:09:43.710 18:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:43.710 18:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.968 18:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.533 18:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- 
# null_size=1015 00:09:44.533 18:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:44.533 true 00:09:44.533 18:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:44.533 18:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.791 18:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.049 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:45.049 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:45.308 true 00:09:45.308 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:45.308 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.566 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.131 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:46.131 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:46.417 true 00:09:46.417 18:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:46.417 18:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.682 18:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.940 18:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:46.940 18:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:47.198 true 00:09:47.198 18:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:47.198 18:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.456 18:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.714 18:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:47.714 18:16:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:47.972 true 00:09:47.972 18:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:47.972 18:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.231 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.496 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:48.496 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:48.754 true 00:09:48.754 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:48.754 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.013 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.271 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:49.271 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:49.530 true 00:09:49.530 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:49.530 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.788 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.046 18:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:50.046 18:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:50.305 true 00:09:50.305 18:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:50.305 18:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.872 18:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.872 18:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:50.872 18:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:51.130 true 00:09:51.130 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:51.130 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.389 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.648 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:51.648 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:51.906 true 00:09:51.906 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:51.906 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.164 18:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.422 18:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:52.422 18:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:52.680 true 00:09:52.680 18:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:52.680 18:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.938 18:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.505 18:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:53.505 18:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:53.505 true 00:09:53.505 18:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:53.505 18:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.763 18:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.025 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:54.025 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:54.592 
true 00:09:54.592 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:54.592 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.851 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.109 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:55.109 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:55.366 true 00:09:55.366 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:55.366 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.624 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.881 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:55.881 18:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:56.446 true 00:09:56.446 18:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:56.446 18:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.706 18:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.966 18:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:56.966 18:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:57.224 true 00:09:57.224 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:57.224 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.482 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.048 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:09:58.048 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:09:58.048 true 00:09:58.048 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:58.048 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.340 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.599 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:09:58.599 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:09:58.877 true 00:09:58.877 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:58.877 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.151 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.408 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:09:59.408 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:09:59.666 true 00:09:59.666 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:09:59.666 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.924 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.182 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:00.182 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:00.440 true 00:10:00.440 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:10:00.440 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.008 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.266 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:10:01.266 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:01.524 true 00:10:01.524 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:10:01.524 18:17:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.783 Initializing NVMe Controllers 00:10:01.783 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:01.783 Controller IO queue size 128, less than required. 00:10:01.783 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:01.783 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:01.783 Initialization complete. Launching workers. 00:10:01.783 ======================================================== 00:10:01.783 Latency(us) 00:10:01.783 Device Information : IOPS MiB/s Average min max 00:10:01.783 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 14101.10 6.89 9077.31 4983.05 22270.81 00:10:01.783 ======================================================== 00:10:01.783 Total : 14101.10 6.89 9077.31 4983.05 22270.81 00:10:01.783 00:10:01.783 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.042 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:10:02.042 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:02.300 true 00:10:02.300 18:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67392 00:10:02.300 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (67392) - No such process 00:10:02.300 18:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 67392 00:10:02.300 18:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.560 18:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:02.818 18:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:02.818 18:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:02.818 18:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:02.818 18:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:02.818 18:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:03.076 null0 00:10:03.076 18:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:03.076 18:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:03.076 18:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:03.335 
null1 00:10:03.335 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:03.335 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:03.335 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:03.593 null2 00:10:03.593 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:03.593 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:03.593 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:03.851 null3 00:10:03.851 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:03.851 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:03.851 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:04.109 null4 00:10:04.109 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:04.109 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:04.109 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:04.367 null5 00:10:04.367 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:04.368 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:04.368 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:04.626 null6 00:10:04.626 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:04.626 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:04.626 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:04.886 null7 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
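The long alternation traced above (kill -0 on the perf process, nvmf_subsystem_remove_ns, nvmf_subsystem_add_ns Delay0, then bdev_null_resize NULL1 with a size that grows from 1001 to 1036) is the first stress phase: namespace 1 is hot-removed and re-added, and the NULL1 bdev behind namespace 2 is grown, while spdk_nvme_perf keeps 128 queued 512-byte random reads running against the 10.0.0.2:4420 listener for 30 seconds. The loop below is a compressed sketch of that phase; the commands are the ones visible in the trace, but the loop shape is an approximation of ns_hotplug_stress.sh rather than a copy.

#!/usr/bin/env bash
# Sketch of the resize-and-hotplug-under-load phase traced above.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
nqn=nqn.2016-06.io.spdk:cnode1

# 30 s of qd-128 random reads against the TCP listener, in the background.
$perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!

null_size=1000
# As long as the workload is alive, keep detaching/reattaching namespace 1
# and growing NULL1, so the initiator keeps seeing attach, detach and resize events.
while kill -0 "$PERF_PID"; do
    $rpc_py nvmf_subsystem_remove_ns "$nqn" 1
    $rpc_py nvmf_subsystem_add_ns "$nqn" Delay0
    null_size=$((null_size + 1))
    $rpc_py bdev_null_resize NULL1 "$null_size"
done
wait "$PERF_PID"

The Device Information block printed above (about 14.1 k IOPS at a 9.1 ms average latency on NSID 2) is what perf reports once that 30-second window ends; after the wait, the script removes namespaces 1 and 2 to prepare for the next phase.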
00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
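Each add_remove worker launched in these entries boils down to alternating two RPCs against the same subsystem. Shown once for namespace 1 backed by null0, with the NQN and argument order taken verbatim from the trace (a running target and an existing nqn.2016-06.io.spdk:cnode1 subsystem are assumed):

  # attach null0 as namespace 1 of cnode1, then detach it again
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

The stress comes from running this pair in a tight loop for eight namespaces at once, as the entries that follow show.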
00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.886 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:04.887 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
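The launch pattern visible around these entries — one backgrounded add_remove per namespace, pids collected via pids+=($!), then a single wait on all eight worker pids a few entries below — comes down to roughly the following. This is a sketch reconstructed from the trace, not the verbatim test script; it folds the RPC pair above into a helper and fans it out across nthreads=8 workers, with the inner (( i < 10 )) loop count taken from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  add_remove() {    # churn one namespace: attach and detach it ten times
      local nsid=$1 bdev=$2
      for ((n = 0; n < 10; n++)); do
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }

  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      add_remove "$((i + 1))" "null$i" &   # one backgrounded worker per namespace
      pids+=($!)
  done
  wait "${pids[@]}"                        # block until every hot-plug worker finishes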
00:10:04.887 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:04.887 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:04.887 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:04.887 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.887 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:04.887 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:04.887 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:04.887 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:04.887 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:04.887 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:04.887 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:04.887 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:04.887 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:04.887 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.887 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 68577 68578 68581 68583 68585 68586 68588 68591 00:10:04.887 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:05.145 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:05.145 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:05.145 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:05.145 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:05.403 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:05.403 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:05.403 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.403 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:05.403 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.403 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.403 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:05.403 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.403 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.403 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:05.403 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.403 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.403 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:05.403 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.403 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.404 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:05.678 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.678 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.678 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:05.678 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.678 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.678 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:05.678 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.678 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.678 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:05.678 
18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.678 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.678 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:05.678 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:05.678 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:05.678 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:05.936 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:05.936 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.936 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:05.936 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:05.936 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:05.936 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.936 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.936 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:06.194 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.194 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.194 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:06.194 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.194 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.194 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:06.194 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.194 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.194 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:06.194 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.194 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.194 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:06.194 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.194 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.194 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:06.194 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.194 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.194 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:06.194 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.194 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.194 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:06.194 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:06.452 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:06.452 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.453 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:06.453 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:06.453 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:06.453 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:06.453 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:06.711 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.711 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.711 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:06.711 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.711 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.711 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:06.711 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.711 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.711 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:06.711 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.711 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.711 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:06.711 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.711 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.711 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:06.711 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.711 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.711 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:06.711 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.711 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.711 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:06.711 
18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.711 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.711 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:06.968 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:06.968 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.968 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:06.968 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:06.969 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:06.969 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:06.969 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:06.969 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:07.227 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.227 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.227 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:07.227 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.227 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.227 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:07.227 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.227 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.227 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:07.227 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.227 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.227 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:07.227 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.227 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.227 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:07.227 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.227 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.227 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:07.485 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.485 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.485 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:07.485 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:07.485 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.485 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.485 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:07.485 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.485 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:07.485 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:07.485 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:07.485 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:07.743 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.743 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.743 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:07.743 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:07.743 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.743 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.743 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:07.743 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:07.743 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.743 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.743 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:07.743 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.743 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.743 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:07.743 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.743 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.743 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:08.002 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.002 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.002 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:08.002 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:08.002 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.002 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.002 
18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:08.002 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.002 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.002 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.002 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:08.002 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:08.002 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:08.002 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:08.270 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:08.270 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.270 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.270 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:08.270 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:08.270 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.270 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.270 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:08.270 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.270 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.270 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:08.270 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:08.529 18:17:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.529 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.529 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:08.529 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.529 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.529 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:08.529 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.529 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.529 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:08.529 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.529 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.529 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:08.529 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.529 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:08.529 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:08.529 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.529 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.529 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:08.787 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:08.787 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:08.787 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:08.787 18:17:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:08.787 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.787 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.787 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:08.787 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.787 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.787 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:08.787 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.787 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.787 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:08.787 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:09.046 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.046 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.046 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:09.046 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.046 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.046 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:09.046 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.046 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.046 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.046 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:09.046 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.046 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.046 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:09.046 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:09.046 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:09.046 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.046 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.046 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:09.306 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:09.306 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:09.306 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:09.306 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.306 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.306 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:09.306 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:09.306 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.306 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.306 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:09.306 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.306 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.306 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:09.567 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
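A note on the "kill: (67392) - No such process" message near the start of this phase: kill -0 sends no signal, it only reports whether a pid still exists, so that message appears to be the liveness probe at ns_hotplug_stress.sh line 44 failing once the background stress process has exited, ending the resize loop rather than indicating a failure. The pattern looks roughly like this, with $stress_pid standing in for the script's own variable (67392 in this run):

  # loop for as long as the background stress process is still alive;
  # a single "No such process" message once it exits is expected output
  while kill -0 "$stress_pid"; do
      sleep 1   # placeholder for the per-iteration hot-plug/resize work
  done
  wait "$stress_pid"   # reap the worker and collect its exit status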
00:10:09.567 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.567 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.567 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:09.567 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.567 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.567 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:09.567 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.567 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.567 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:09.567 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:09.567 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.567 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.567 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.567 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:09.567 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:09.824 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.824 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.824 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:09.824 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:09.824 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:09.824 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.824 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:10:09.825 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:09.825 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:09.825 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.825 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.825 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:09.825 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:10.082 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.082 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.082 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:10.082 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:10.082 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.082 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.082 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:10.082 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.082 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.082 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:10.082 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:10.082 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.082 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.082 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:10.339 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.339 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.339 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:10.339 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:10.339 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.339 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.339 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:10.339 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.339 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:10.339 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:10.339 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.339 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.597 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:10.597 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:10.598 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.598 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.598 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.598 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.598 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.598 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.598 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:10.598 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.598 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.855 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.855 18:17:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.855 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.855 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.855 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.855 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.855 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:10.855 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:10.855 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:10.855 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:10.855 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:10.855 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:10.855 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:10.855 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:10.855 rmmod nvme_tcp 00:10:10.855 rmmod nvme_fabrics 00:10:10.855 rmmod nvme_keyring 00:10:10.855 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:10.855 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:10.855 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:10.855 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 67249 ']' 00:10:10.855 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 67249 00:10:10.855 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 67249 ']' 00:10:10.855 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 67249 00:10:10.855 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:10:10.856 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:10.856 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67249 00:10:10.856 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:10.856 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:10.856 killing process with pid 67249 00:10:10.856 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67249' 00:10:10.856 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 67249 00:10:10.856 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 67249 00:10:12.757 18:17:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:12.757 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:12.758 00:10:12.758 real 0m45.250s 00:10:12.758 user 3m37.752s 00:10:12.758 sys 0m14.941s 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:12.758 ************************************ 00:10:12.758 END TEST nvmf_ns_hotplug_stress 00:10:12.758 ************************************ 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:12.758 ************************************ 00:10:12.758 START TEST nvmf_delete_subsystem 00:10:12.758 ************************************ 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:12.758 * Looking for test storage... 
00:10:12.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:12.758 18:17:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:12.758 Cannot find device "nvmf_tgt_br" 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:12.758 Cannot find device "nvmf_tgt_br2" 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:12.758 Cannot find device "nvmf_tgt_br" 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:12.758 Cannot find device "nvmf_tgt_br2" 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:12.758 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:12.758 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec 
nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:12.758 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:13.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:13.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:10:13.017 00:10:13.017 --- 10.0.0.2 ping statistics --- 00:10:13.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.017 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:13.017 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:13.017 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:10:13.017 00:10:13.017 --- 10.0.0.3 ping statistics --- 00:10:13.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.017 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:13.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:13.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:13.017 00:10:13.017 --- 10.0.0.1 ping statistics --- 00:10:13.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.017 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=69913 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 69913 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 69913 ']' 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:13.017 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:13.275 [2024-07-22 18:17:25.044782] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
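For readers following the trace: the nvmf_veth_init block above (nvmf/common.sh@141-@207) builds the virtual network these NET_TYPE=virt runs use, giving the initiator 10.0.0.1 and the namespaced target 10.0.0.2/10.0.0.3 behind a bridge, opening TCP port 4420, verifying reachability with pings, and then (nvmf/common.sh@480) starting nvmf_tgt inside the nvmf_tgt_ns_spdk namespace. Below is a condensed sketch of what those traced commands do; names, addresses and flags come straight from the log, while the grouping into loops is shorthand rather than the verbatim contents of nvmf/common.sh.

  # build the namespace and the three veth pairs seen in the trace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target port 1
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target port 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addresses: initiator 10.0.0.1, target interfaces 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up and bridge the host-side peers together
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  # allow NVMe/TCP traffic to the initiator interface and across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # finally the target runs inside the namespace, as traced at nvmf/common.sh@480:
  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &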
00:10:13.275 [2024-07-22 18:17:25.044986] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.275 [2024-07-22 18:17:25.227262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:13.533 [2024-07-22 18:17:25.523792] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.533 [2024-07-22 18:17:25.523907] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.533 [2024-07-22 18:17:25.523930] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.533 [2024-07-22 18:17:25.523948] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.533 [2024-07-22 18:17:25.523962] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:13.533 [2024-07-22 18:17:25.524193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.533 [2024-07-22 18:17:25.524583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.099 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:14.099 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:10:14.099 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:14.099 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:14.099 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:14.099 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.099 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:14.099 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.099 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:14.099 [2024-07-22 18:17:26.007980] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:14.099 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.099 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:14.099 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.099 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:14.099 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.100 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:14.100 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 
00:10:14.100 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:14.100 [2024-07-22 18:17:26.026411] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.100 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.100 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:14.100 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.100 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:14.100 NULL1 00:10:14.100 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.100 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:14.100 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.100 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:14.100 Delay0 00:10:14.100 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.100 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.100 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.100 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:14.100 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.100 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=69970 00:10:14.100 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:14.100 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:14.357 [2024-07-22 18:17:26.300601] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
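The test body itself is visible in the rpc_cmd traces above (delete_subsystem.sh@15-@28): create the TCP transport, a subsystem cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev so that I/O stays in flight, then launch a 5-second spdk_nvme_perf workload against it. Collapsed into plain commands as a sketch (rpc_cmd is the harness wrapper around scripts/rpc.py, which the hotplug test above calls directly; this is not the script verbatim):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512      # 1000 MB null bdev with 512-byte blocks
  # large artificial read/write latencies keep submitted I/O queued inside the target
  scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # drive random 70/30 read/write I/O at queue depth 128 from the initiator side for 5 s
  build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!                                         # 69970 in this run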
00:10:16.309 18:17:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:16.309 18:17:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.309 18:17:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 starting I/O failed: -6 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 starting I/O failed: -6 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 starting I/O failed: -6 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 starting I/O failed: -6 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 starting I/O failed: -6 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 starting I/O failed: -6 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 starting I/O failed: -6 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 starting I/O failed: -6 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 starting I/O failed: -6 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 starting I/O failed: -6 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 starting I/O failed: -6 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 starting I/O failed: -6 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 [2024-07-22 18:17:28.364855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500000f800 is same with 
the state(5) to be set 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 starting I/O failed: -6 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 starting I/O failed: -6 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 starting I/O failed: -6 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 starting I/O failed: -6 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 starting I/O failed: -6 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 starting I/O failed: -6 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 
00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 starting I/O failed: -6 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 starting I/O failed: -6 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 starting I/O failed: -6 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 starting I/O failed: -6 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 starting I/O failed: -6 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 starting I/O failed: -6 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 [2024-07-22 18:17:28.366226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000010480 is same with the state(5) to be set 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Write completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.569 Read completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Write completed with error (sct=0, sc=8) 00:10:16.570 Read 
completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Write completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Write completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Write completed with error (sct=0, sc=8) 00:10:16.570 Write completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Write completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Write completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Write completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Write completed with error (sct=0, sc=8) 00:10:16.570 Write completed with error (sct=0, sc=8) 00:10:16.570 Write completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Write completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Write completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Write completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Read completed with error (sct=0, sc=8) 00:10:16.570 Write completed with error (sct=0, sc=8) 00:10:16.570 Write completed with error (sct=0, sc=8) 00:10:17.504 [2024-07-22 18:17:29.321886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500000f080 is same with the state(5) to be set 00:10:17.504 Write completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Write completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Write completed with error (sct=0, sc=8) 00:10:17.504 Write completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Write completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Write completed with error 
(sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Write completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Write completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Write completed with error (sct=0, sc=8) 00:10:17.504 Write completed with error (sct=0, sc=8) 00:10:17.504 [2024-07-22 18:17:29.363992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500000f580 is same with the state(5) to be set 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Write completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Write completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Write completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Write completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Read completed with error (sct=0, sc=8) 00:10:17.504 Write completed with error (sct=0, sc=8) 00:10:17.505 [2024-07-22 18:17:29.364893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000010200 is same with the state(5) to be set 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Write completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Write completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Write completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Write completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Write completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Write completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, 
sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Write completed with error (sct=0, sc=8) 00:10:17.505 Write completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Write completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Write completed with error (sct=0, sc=8) 00:10:17.505 [2024-07-22 18:17:29.366013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500000fa80 is same with the state(5) to be set 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Write completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Write completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Write completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Write completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Write completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Write completed with error (sct=0, sc=8) 00:10:17.505 Write completed with error (sct=0, sc=8) 00:10:17.505 Write completed with error (sct=0, sc=8) 00:10:17.505 Write completed with error (sct=0, sc=8) 00:10:17.505 Write completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 Write completed with error (sct=0, sc=8) 00:10:17.505 Read completed with error (sct=0, sc=8) 00:10:17.505 [2024-07-22 18:17:29.367714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000010700 is same with the state(5) to be set 00:10:17.505 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.505 Initializing NVMe Controllers 00:10:17.505 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:17.505 Controller IO queue size 128, less than required. 00:10:17.505 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:17.505 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:17.505 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:17.505 Initialization complete. Launching workers. 
00:10:17.505 ======================================================== 00:10:17.505 Latency(us) 00:10:17.505 Device Information : IOPS MiB/s Average min max 00:10:17.505 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 175.63 0.09 886486.78 1293.39 1023012.06 00:10:17.505 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 175.63 0.09 886927.86 640.16 1023500.08 00:10:17.505 ======================================================== 00:10:17.505 Total : 351.25 0.17 886707.32 640.16 1023500.08 00:10:17.505 00:10:17.505 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:17.505 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 69970 00:10:17.505 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:17.505 [2024-07-22 18:17:29.373044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500000f080 (9): Bad file descriptor 00:10:17.505 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 69970 00:10:18.072 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (69970) - No such process 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 69970 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 69970 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 69970 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:18.072 [2024-07-22 18:17:29.895599] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=70015 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 70015 00:10:18.072 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:18.331 [2024-07-22 18:17:30.129411] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
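The trace above re-creates the subsystem (nvmf_create_subsystem, nvmf_subsystem_add_listener, nvmf_subsystem_add_ns) and then lets target/delete_subsystem.sh lines 54-62 wait on the background spdk_nvme_perf run with a bounded kill -0 poll; kill -0 sends no signal, it only reports whether the PID still exists. A minimal stand-alone sketch of that flow, reusing the values shown in the log and assuming rpc.py talks to the default /var/tmp/spdk.sock, might look like:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Rebuild the subsystem that the first pass deleted (values copied from the trace).
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Drive I/O at the target in the background with the same perf arguments as the log.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# kill -0 fails once the process has exited; bound the wait at roughly 10 seconds.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && { echo "perf still running after ~10s" >&2; break; }
    sleep 0.5
done
wait "$perf_pid" || true   # reap the background job; its exit code is not the point here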
00:10:18.589 18:17:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:18.589 18:17:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 70015 00:10:18.589 18:17:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:19.155 18:17:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:19.155 18:17:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 70015 00:10:19.155 18:17:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:19.414 18:17:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:19.414 18:17:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 70015 00:10:19.414 18:17:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:20.347 18:17:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:20.347 18:17:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 70015 00:10:20.347 18:17:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:20.605 18:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:20.605 18:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 70015 00:10:20.605 18:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:21.172 18:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:21.172 18:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 70015 00:10:21.172 18:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:21.430 Initializing NVMe Controllers 00:10:21.430 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:21.430 Controller IO queue size 128, less than required. 00:10:21.430 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:21.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:21.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:21.430 Initialization complete. Launching workers. 
00:10:21.430 ======================================================== 00:10:21.430 Latency(us) 00:10:21.430 Device Information : IOPS MiB/s Average min max 00:10:21.430 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1008578.02 1000294.03 1018515.59 00:10:21.430 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1007557.57 1000265.93 1018461.25 00:10:21.430 ======================================================== 00:10:21.430 Total : 256.00 0.12 1008067.80 1000265.93 1018515.59 00:10:21.430 00:10:21.430 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:21.430 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 70015 00:10:21.430 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (70015) - No such process 00:10:21.430 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 70015 00:10:21.430 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:21.430 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:21.430 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:21.430 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:10:21.687 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:21.687 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:10:21.687 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:21.687 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:21.687 rmmod nvme_tcp 00:10:21.687 rmmod nvme_fabrics 00:10:21.687 rmmod nvme_keyring 00:10:21.687 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:21.687 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:10:21.687 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:10:21.687 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 69913 ']' 00:10:21.687 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 69913 00:10:21.687 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 69913 ']' 00:10:21.687 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 69913 00:10:21.688 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:10:21.688 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:21.688 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69913 00:10:21.688 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:21.688 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:21.688 18:17:33 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69913' 00:10:21.688 killing process with pid 69913 00:10:21.688 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 69913 00:10:21.688 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 69913 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:23.066 00:10:23.066 real 0m10.434s 00:10:23.066 user 0m30.335s 00:10:23.066 sys 0m1.657s 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:23.066 ************************************ 00:10:23.066 END TEST nvmf_delete_subsystem 00:10:23.066 ************************************ 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:23.066 ************************************ 00:10:23.066 START TEST nvmf_host_management 00:10:23.066 ************************************ 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:23.066 * Looking for test storage... 
00:10:23.066 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.066 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:23.067 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.067 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:10:23.067 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:23.067 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:23.067 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.067 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.067 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.067 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:10:23.067 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:23.067 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:23.067 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:23.067 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:23.067 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 
-- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:23.067 Cannot find device "nvmf_tgt_br" 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:23.067 Cannot find device "nvmf_tgt_br2" 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:23.067 Cannot find device "nvmf_tgt_br" 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:10:23.067 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:23.325 Cannot find device "nvmf_tgt_br2" 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:23.325 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:23.325 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:23.325 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:23.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:23.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:10:23.621 00:10:23.621 --- 10.0.0.2 ping statistics --- 00:10:23.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.621 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:23.621 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:23.621 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:10:23.621 00:10:23.621 --- 10.0.0.3 ping statistics --- 00:10:23.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.621 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:23.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:23.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:23.621 00:10:23.621 --- 10.0.0.1 ping statistics --- 00:10:23.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.621 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=70271 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 70271 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 70271 ']' 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:23.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:23.621 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:23.621 [2024-07-22 18:17:35.513214] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
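The namespace plumbing traced above (nvmf/common.sh lines 141-207, nvmf_veth_init) builds a throwaway veth/bridge topology so the initiator side in the host namespace can reach the target inside netns nvmf_tgt_ns_spdk over plain TCP, with no physical NIC involved. Condensed to a single target interface (the second one, nvmf_tgt_if2 / 10.0.0.3, is wired up the same way), the same setup is roughly:

# One namespace for the target, veth pairs for initiator and target, one bridge in between.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# 10.0.0.1 stays with the initiator (host namespace), 10.0.0.2 lives next to the target.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side ends of both veth pairs so the two namespaces share an L2 segment.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Accept NVMe/TCP traffic on port 4420 and let the bridge forward to itself.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2   # same sanity check the script performs before starting the target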
00:10:23.621 [2024-07-22 18:17:35.513359] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.901 [2024-07-22 18:17:35.685527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:24.160 [2024-07-22 18:17:35.972898] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:24.160 [2024-07-22 18:17:35.973019] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:24.160 [2024-07-22 18:17:35.973041] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:24.160 [2024-07-22 18:17:35.973060] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:24.160 [2024-07-22 18:17:35.973075] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:24.160 [2024-07-22 18:17:35.973404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:24.160 [2024-07-22 18:17:35.973869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:24.160 [2024-07-22 18:17:35.973991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.160 [2024-07-22 18:17:35.974002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:24.726 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:24.726 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:10:24.726 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:24.726 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:24.726 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:24.726 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:24.726 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:24.726 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.726 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:24.727 [2024-07-22 18:17:36.571877] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:24.727 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.727 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:24.727 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:24.727 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:24.727 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:10:24.727 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:24.727 18:17:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:24.727 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.727 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:24.727 Malloc0 00:10:24.727 [2024-07-22 18:17:36.698099] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:24.727 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.727 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:24.727 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:24.727 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:24.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:24.985 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=70343 00:10:24.985 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 70343 /var/tmp/bdevperf.sock 00:10:24.985 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 70343 ']' 00:10:24.985 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:24.985 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:24.985 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:24.986 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
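waitforlisten itself is not expanded in this trace, so its exact implementation is not shown; conceptually it just blocks until the freshly started bdevperf process is alive and answering RPCs on /var/tmp/bdevperf.sock. A hypothetical stand-in with the same shape (rpc_get_methods is a cheap RPC every running SPDK application serves) could be:

# Hypothetical readiness poll: wait until bdevperf ($perfpid, from the trace) serves RPCs.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
for _ in $(seq 1 100); do
    kill -0 "$perfpid" 2>/dev/null || { echo "bdevperf exited before listening" >&2; exit 1; }
    if "$rpc" -s "$sock" rpc_get_methods > /dev/null 2>&1; then
        break          # socket is up and the app responds, safe to continue
    fi
    sleep 0.1
done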
00:10:24.986 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:24.986 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:24.986 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:24.986 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:10:24.986 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:10:24.986 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:24.986 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:24.986 { 00:10:24.986 "params": { 00:10:24.986 "name": "Nvme$subsystem", 00:10:24.986 "trtype": "$TEST_TRANSPORT", 00:10:24.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:24.986 "adrfam": "ipv4", 00:10:24.986 "trsvcid": "$NVMF_PORT", 00:10:24.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:24.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:24.986 "hdgst": ${hdgst:-false}, 00:10:24.986 "ddgst": ${ddgst:-false} 00:10:24.986 }, 00:10:24.986 "method": "bdev_nvme_attach_controller" 00:10:24.986 } 00:10:24.986 EOF 00:10:24.986 )") 00:10:24.986 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:10:24.986 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:10:24.986 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:10:24.986 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:24.986 "params": { 00:10:24.986 "name": "Nvme0", 00:10:24.986 "trtype": "tcp", 00:10:24.986 "traddr": "10.0.0.2", 00:10:24.986 "adrfam": "ipv4", 00:10:24.986 "trsvcid": "4420", 00:10:24.986 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:24.986 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:24.986 "hdgst": false, 00:10:24.986 "ddgst": false 00:10:24.986 }, 00:10:24.986 "method": "bdev_nvme_attach_controller" 00:10:24.986 }' 00:10:24.986 [2024-07-22 18:17:36.873403] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:24.986 [2024-07-22 18:17:36.873637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70343 ] 00:10:25.244 [2024-07-22 18:17:37.053804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.503 [2024-07-22 18:17:37.341819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.069 Running I/O for 10 seconds... 
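The block just above is gen_nvmf_target_json assembling the bdev_nvme_attach_controller parameters that bdevperf consumes; the --json /dev/fd/63 argument seen at host_management.sh@72 is what a bash process substitution around that generator typically expands to, so bdevperf reads the generated config as if it were a file. The full wrapper is not printed in the trace, but an equivalent file-based invocation (hypothetical path /tmp/bdevperf_nvme.json, wrapper shape assumed to be the usual SPDK "subsystems" config) would be roughly:

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Same bdevperf arguments as the trace, but pointing --json at the file written above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme.json \
    -q 64 -o 65536 -w verify -t 10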
00:10:26.069 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:26.069 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:10:26.069 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:26.069 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.069 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:26.069 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.069 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:26.069 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:26.070 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:26.070 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:26.070 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:26.070 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:26.070 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:26.070 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:26.070 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:26.070 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.070 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:26.070 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:26.070 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.070 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:10:26.070 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:10:26.070 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:10:26.329 18:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:10:26.329 18:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:26.329 18:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:26.329 18:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:26.329 18:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.329 18:17:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:26.329 18:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.329 18:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:10:26.329 18:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:10:26.329 18:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:26.329 18:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:26.329 18:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:26.329 18:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:26.329 18:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.329 18:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:26.329 [2024-07-22 18:17:38.224347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:26.329 [2024-07-22 18:17:38.224586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.329 [2024-07-22 18:17:38.224797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:26.329 [2024-07-22 18:17:38.224974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.329 [2024-07-22 18:17:38.225162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:26.329 [2024-07-22 18:17:38.225357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.329 [2024-07-22 18:17:38.225500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:26.329 [2024-07-22 18:17:38.225637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.329 [2024-07-22 18:17:38.225801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:26.329 [2024-07-22 18:17:38.225972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.329 [2024-07-22 18:17:38.226134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:26.329 [2024-07-22 18:17:38.226156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.329 [2024-07-22 18:17:38.226175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:26.329 
[2024-07-22 18:17:38.226189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.329 [2024-07-22 18:17:38.226206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:26.330 [2024-07-22 18:17:38.226220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.330 [2024-07-22 18:17:38.226237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:26.330 [2024-07-22 18:17:38.226251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.330 [2024-07-22 18:17:38.226269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:26.330 [2024-07-22 18:17:38.226283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.330 [2024-07-22 18:17:38.226301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:26.330 [2024-07-22 18:17:38.226315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.330 [2024-07-22 18:17:38.226332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:26.330 [2024-07-22 18:17:38.226346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.330 [2024-07-22 18:17:38.226364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:26.330 [2024-07-22 18:17:38.226389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.330 [2024-07-22 18:17:38.226413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:26.330 [2024-07-22 18:17:38.226427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.330 [2024-07-22 18:17:38.226444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:26.330 [2024-07-22 18:17:38.226458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.330 [2024-07-22 18:17:38.226475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:26.330 [2024-07-22 18:17:38.226489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.330 [2024-07-22 18:17:38.226506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:26.330 [2024-07-22 18:17:38.226520] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:26.330-00:10:26.331 [2024-07-22 18:17:38.226538 .. 18:17:38.228121] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: the same READ sqid:1 nsid:1 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 command and ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 completion pair repeats for cid:9 through cid:55, lba:66688 through lba:72576 (47 aborted commands in total)
00:10:26.331 [2024-07-22 18:17:38.228137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(5) to be set
00:10:26.331 [2024-07-22 18:17:38.228413] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b500 was disconnected and freed. reset controller.
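The ABORTED - SQ DELETION (00/08) completions above (status code type 0x0, status code 0x08) are what the initiator sees when the target tears down the I/O submission queue while reads are still outstanding: every queued command is failed back, then bdev_nvme frees the qpair and schedules a controller reset. As a rough standalone sketch, not the exact mechanism host_management.sh uses, the same pattern can be provoked by deleting the subsystem under an active bdevperf workload:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

# gen_nvmf_target_json is the test-suite helper seen later in this log; outside
# the suite, any bdevperf JSON config that attaches the controller would do.
$bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10 &
perf_pid=$!

sleep 2
# Dropping the subsystem closes the TCP qpair; the outstanding READs are then
# completed with ABORTED - SQ DELETION, as printed above.
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0

wait $perf_pid || true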
00:10:26.331 [2024-07-22 18:17:38.230119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:10:26.331 18:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.331 18:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:26.331 18:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.331 18:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:26.331 task offset: 72704 on job bdev=Nvme0n1 fails 00:10:26.331 00:10:26.331 Latency(us) 00:10:26.331 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:26.331 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:26.331 Job: Nvme0n1 ended in about 0.45 seconds with error 00:10:26.331 Verification LBA range: start 0x0 length 0x400 00:10:26.331 Nvme0n1 : 0.45 1137.40 71.09 142.17 0.00 48374.67 5183.30 41943.04 00:10:26.331 =================================================================================================================== 00:10:26.331 Total : 1137.40 71.09 142.17 0.00 48374.67 5183.30 41943.04 00:10:26.331 [2024-07-22 18:17:38.236477] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:26.331 [2024-07-22 18:17:38.236646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:10:26.331 18:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.331 18:17:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:26.331 [2024-07-22 18:17:38.242953] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
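The rpc_cmd nvmf_subsystem_add_host call interleaved with the reset above adds the initiator's host NQN to cnode0's allowed-host list. For reference, a minimal sketch of the add/remove RPC pair (the remove call is an assumed companion and does not appear in this excerpt):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2016-06.io.spdk:cnode0
hostnqn=nqn.2016-06.io.spdk:host0

# Grant access to one host NQN; that initiator can then (re)establish I/O qpairs.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn"

# Revoking it again is expected to disconnect that host from the subsystem.
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"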
00:10:27.269 18:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 70343 00:10:27.269 18:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:27.269 18:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:27.269 18:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:27.269 18:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:10:27.269 18:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:10:27.269 18:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:27.269 18:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:27.269 { 00:10:27.269 "params": { 00:10:27.269 "name": "Nvme$subsystem", 00:10:27.269 "trtype": "$TEST_TRANSPORT", 00:10:27.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:27.269 "adrfam": "ipv4", 00:10:27.269 "trsvcid": "$NVMF_PORT", 00:10:27.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:27.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:27.269 "hdgst": ${hdgst:-false}, 00:10:27.269 "ddgst": ${ddgst:-false} 00:10:27.269 }, 00:10:27.269 "method": "bdev_nvme_attach_controller" 00:10:27.269 } 00:10:27.269 EOF 00:10:27.269 )") 00:10:27.269 18:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:10:27.269 18:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:10:27.269 18:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:10:27.269 18:17:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:27.269 "params": { 00:10:27.269 "name": "Nvme0", 00:10:27.269 "trtype": "tcp", 00:10:27.269 "traddr": "10.0.0.2", 00:10:27.269 "adrfam": "ipv4", 00:10:27.269 "trsvcid": "4420", 00:10:27.269 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:27.269 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:27.269 "hdgst": false, 00:10:27.269 "ddgst": false 00:10:27.269 }, 00:10:27.269 "method": "bdev_nvme_attach_controller" 00:10:27.269 }' 00:10:27.535 [2024-07-22 18:17:39.374261] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:27.535 [2024-07-22 18:17:39.375206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70398 ] 00:10:27.794 [2024-07-22 18:17:39.556137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.052 [2024-07-22 18:17:39.812721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.309 Running I/O for 1 seconds... 
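The heredoc above is gen_nvmf_target_json building the single bdev_nvme_attach_controller entry that bdevperf consumes through --json /dev/fd/62. Written out as a standalone file, the equivalent invocation looks roughly like the sketch below; the surrounding subsystems/config wrapper follows SPDK's usual JSON config shape and is assumed, only the params block is taken from the trace:

# Hypothetical bdevperf.json; the "params" block matches the one printed above.
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Same flags as the run above: 64 outstanding 64 KiB verify I/Os for 1 second.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/bdevperf.json \
    -q 64 -o 65536 -w verify -t 1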
00:10:29.682 00:10:29.682 Latency(us) 00:10:29.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:29.682 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:29.682 Verification LBA range: start 0x0 length 0x400 00:10:29.682 Nvme0n1 : 1.04 1297.41 81.09 0.00 0.00 48399.07 8162.21 43134.60 00:10:29.682 =================================================================================================================== 00:10:29.682 Total : 1297.41 81.09 0.00 0.00 48399.07 8162.21 43134.60 00:10:30.640 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 68: 70343 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:10:30.640 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:10:30.640 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:30.640 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:10:30.640 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:10:30.640 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:10:30.640 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:30.640 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:10:30.640 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:30.640 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:10:30.640 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:30.640 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:30.640 rmmod nvme_tcp 00:10:30.640 rmmod nvme_fabrics 00:10:30.640 rmmod nvme_keyring 00:10:30.640 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:30.640 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:10:30.640 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:10:30.640 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 70271 ']' 00:10:30.640 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 70271 00:10:30.640 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 70271 ']' 00:10:30.640 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 70271 00:10:30.640 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:10:30.640 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:30.640 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70271 00:10:30.640 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:10:30.640 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:30.640 killing process with pid 70271 00:10:30.640 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70271' 00:10:30.640 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 70271 00:10:30.640 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 70271 00:10:32.012 [2024-07-22 18:17:43.987965] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:32.270 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:32.270 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:32.270 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:32.270 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:32.270 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:32.270 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.270 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.270 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.270 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:32.270 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:32.270 ************************************ 00:10:32.270 END TEST nvmf_host_management 00:10:32.270 ************************************ 00:10:32.270 00:10:32.270 real 0m9.227s 00:10:32.270 user 0m36.213s 00:10:32.270 sys 0m1.879s 00:10:32.270 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:32.270 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:32.270 18:17:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:10:32.270 18:17:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:32.270 18:17:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:32.270 18:17:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.270 18:17:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:32.271 ************************************ 00:10:32.271 START TEST nvmf_lvol 00:10:32.271 ************************************ 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:32.271 * Looking for test storage... 
00:10:32.271 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:32.271 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:32.530 Cannot find device "nvmf_tgt_br" 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:32.530 Cannot find device "nvmf_tgt_br2" 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:32.530 Cannot find device "nvmf_tgt_br" 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:32.530 Cannot find device "nvmf_tgt_br2" 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:32.530 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:32.530 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:32.530 18:17:44 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:32.530 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:32.531 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:32.531 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:32.531 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:32.531 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:32.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:32.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:10:32.789 00:10:32.789 --- 10.0.0.2 ping statistics --- 00:10:32.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.789 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:32.789 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:32.789 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:10:32.789 00:10:32.789 --- 10.0.0.3 ping statistics --- 00:10:32.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.789 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:32.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:32.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:10:32.789 00:10:32.789 --- 10.0.0.1 ping statistics --- 00:10:32.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.789 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=70646 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 70646 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 70646 ']' 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:32.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:32.789 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:32.790 [2024-07-22 18:17:44.738686] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
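The ip/iptables trace above is nvmf_veth_init building the test topology from scratch: the target runs inside the nvmf_tgt_ns_spdk namespace, each side gets a veth pair, the host-side peers are bridged, and TCP port 4420 is opened, after which the pings confirm reachability. Condensed into one sketch (same commands as the trace, with the second target interface left out):

# Target lives in its own netns; the initiator side stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the two host-side peers and open the NVMe/TCP port.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator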
00:10:32.790 [2024-07-22 18:17:44.738871] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.048 [2024-07-22 18:17:44.906360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:33.307 [2024-07-22 18:17:45.152274] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:33.307 [2024-07-22 18:17:45.152341] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:33.307 [2024-07-22 18:17:45.152359] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:33.307 [2024-07-22 18:17:45.152375] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:33.307 [2024-07-22 18:17:45.152387] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:33.308 [2024-07-22 18:17:45.152588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.308 [2024-07-22 18:17:45.153026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.308 [2024-07-22 18:17:45.153271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.874 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:33.874 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:10:33.874 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:33.874 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:33.874 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:33.874 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.874 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:34.133 [2024-07-22 18:17:45.977095] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:34.133 18:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:34.391 18:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:34.391 18:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:34.649 18:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:34.649 18:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:34.908 18:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:35.165 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=cdee3d9a-ec08-4299-b3a2-b217ecf400a0 00:10:35.165 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
cdee3d9a-ec08-4299-b3a2-b217ecf400a0 lvol 20 00:10:35.423 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1fe96596-d957-4e3a-b14b-65ce5518564c 00:10:35.423 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:35.680 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1fe96596-d957-4e3a-b14b-65ce5518564c 00:10:35.937 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:36.194 [2024-07-22 18:17:48.206703] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:36.451 18:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:36.708 18:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=70799 00:10:36.708 18:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:36.708 18:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:37.642 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 1fe96596-d957-4e3a-b14b-65ce5518564c MY_SNAPSHOT 00:10:37.900 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=8921027a-3aa0-4add-8ceb-e7495e76b1d0 00:10:37.900 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 1fe96596-d957-4e3a-b14b-65ce5518564c 30 00:10:38.158 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 8921027a-3aa0-4add-8ceb-e7495e76b1d0 MY_CLONE 00:10:38.725 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c8381a7d-a309-4422-b60e-d70561d3b118 00:10:38.725 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate c8381a7d-a309-4422-b60e-d70561d3b118 00:10:39.290 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 70799 00:10:47.403 Initializing NVMe Controllers 00:10:47.403 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:47.403 Controller IO queue size 128, less than required. 00:10:47.403 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:47.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:47.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:47.403 Initialization complete. Launching workers. 
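Every step of the lvol path above goes through rpc.py against the target started with -m 0x7. Collected in one place, the sequence this test drives is roughly the following sketch; names, sizes and flags are the ones from the trace, and the command-substitution capture mirrors how the script stores the lvs/lvol/snapshot/clone identifiers:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192

# Two 64 MiB malloc bdevs striped into raid0, with an lvolstore on top.
$rpc bdev_malloc_create 64 512            # -> Malloc0
$rpc bdev_malloc_create 64 512            # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)

# Export the 20 MiB lvol over NVMe/TCP.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# While spdk_nvme_perf (the randwrite invocation shown above) writes to the
# namespace: snapshot, grow, clone, and inflate the volume.
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"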
00:10:47.403 ======================================================== 00:10:47.403 Latency(us) 00:10:47.403 Device Information : IOPS MiB/s Average min max 00:10:47.403 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7694.60 30.06 16637.28 286.08 180536.34 00:10:47.403 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7555.30 29.51 16954.00 4382.98 197385.73 00:10:47.403 ======================================================== 00:10:47.403 Total : 15249.90 59.57 16794.19 286.08 197385.73 00:10:47.403 00:10:47.403 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:47.403 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1fe96596-d957-4e3a-b14b-65ce5518564c 00:10:47.661 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cdee3d9a-ec08-4299-b3a2-b217ecf400a0 00:10:47.919 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:47.919 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:47.919 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:47.919 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:47.919 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:10:47.919 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:47.919 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:10:47.919 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:47.919 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:47.919 rmmod nvme_tcp 00:10:47.919 rmmod nvme_fabrics 00:10:47.919 rmmod nvme_keyring 00:10:47.919 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:47.919 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:10:47.919 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:10:47.919 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 70646 ']' 00:10:47.919 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 70646 00:10:47.919 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 70646 ']' 00:10:47.919 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 70646 00:10:47.919 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:10:47.919 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:47.919 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70646 00:10:47.919 killing process with pid 70646 00:10:47.919 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:47.919 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:47.919 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 70646' 00:10:47.919 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 70646 00:10:47.919 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 70646 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:49.819 00:10:49.819 real 0m17.348s 00:10:49.819 user 1m9.752s 00:10:49.819 sys 0m3.696s 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:49.819 ************************************ 00:10:49.819 END TEST nvmf_lvol 00:10:49.819 ************************************ 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:49.819 ************************************ 00:10:49.819 START TEST nvmf_lvs_grow 00:10:49.819 ************************************ 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:49.819 * Looking for test storage... 
00:10:49.819 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.819 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
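For readers following the trace, the nvmf_veth_init sequence that nvmftestinit runs next builds a small veth/namespace topology. Below is a condensed, standalone sketch assembled only from the commands visible in the trace (interface names, addresses and iptables rules are taken verbatim); the real helper in test/nvmf/common.sh additionally tears down any leftover interfaces first and tolerates individual command failures, which is omitted here. It needs root and iproute2.

#!/usr/bin/env bash
# Condensed sketch of the topology nvmf_veth_init builds (names/addresses from the trace).
set -euo pipefail

NS=nvmf_tgt_ns_spdk

# Target side lives in its own network namespace.
ip netns add "$NS"

# Three veth pairs: one for the initiator, two for the target.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target ends into the namespace and assign addresses.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"
ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP

# Bring everything up on both sides.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# Bridge the host-side peers so 10.0.0.1 can reach 10.0.0.2 and 10.0.0.3.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP (port 4420) in and forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks, mirroring the pings in the trace.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1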
00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:49.820 Cannot find device "nvmf_tgt_br" 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:49.820 Cannot find device "nvmf_tgt_br2" 00:10:49.820 18:18:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:49.820 Cannot find device "nvmf_tgt_br" 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:49.820 Cannot find device "nvmf_tgt_br2" 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:49.820 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:50.079 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:50.079 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:10:50.079 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:50.079 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:50.079 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:10:50.079 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:50.079 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:50.079 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:50.079 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:50.079 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:50.079 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:50.079 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:50.079 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:50.079 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:50.079 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:50.079 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:50.079 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:50.079 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:50.079 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:50.079 18:18:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:50.079 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:50.079 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:50.079 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:50.079 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:50.079 18:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:50.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:50.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:10:50.079 00:10:50.079 --- 10.0.0.2 ping statistics --- 00:10:50.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.079 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:50.079 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:50.079 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:10:50.079 00:10:50.079 --- 10.0.0.3 ping statistics --- 00:10:50.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.079 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:50.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:50.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:10:50.079 00:10:50.079 --- 10.0.0.1 ping statistics --- 00:10:50.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.079 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=71177 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 71177 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 71177 ']' 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:50.079 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:50.337 [2024-07-22 18:18:02.188230] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
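The lvs_grow_clean run that the trace continues with boils down to the RPC sequence sketched below: start nvmf_tgt inside the namespace, create the TCP transport, build an lvstore on a 200 MiB AIO file (49 data clusters at 4 MiB each), carve out a 150 MiB lvol, enlarge the backing file to 400 MiB, and export the lvol through nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420. Paths and arguments are taken from the trace; the polling loop is only a stand-in for the harness's waitforlisten helper, and error handling is omitted.

#!/usr/bin/env bash
# Sketch of the target-side setup phase of lvs_grow_clean (values from the trace).
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"
AIO="$SPDK/test/nvmf/target/aio_bdev"

# Target app runs inside the namespace, so 10.0.0.2/10.0.0.3 are its addresses.
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
until "$RPC" spdk_get_version >/dev/null 2>&1; do sleep 0.5; done

"$RPC" nvmf_create_transport -t tcp -o -u 8192

# 200 MiB AIO file -> lvstore with 4 MiB clusters (49 data clusters fit).
rm -f "$AIO" && truncate -s 200M "$AIO"
"$RPC" bdev_aio_create "$AIO" aio_bdev 4096
lvs=$("$RPC" bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)
"$RPC" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49

# 150 MiB lvol, then grow the backing file and let the AIO bdev notice.
lvol=$("$RPC" bdev_lvol_create -u "$lvs" lvol 150)
truncate -s 400M "$AIO"
"$RPC" bdev_aio_rescan aio_bdev

# Export the lvol over NVMe/TCP on the first target IP.
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
"$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420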
00:10:50.337 [2024-07-22 18:18:02.188436] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.596 [2024-07-22 18:18:02.365807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.855 [2024-07-22 18:18:02.710509] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.855 [2024-07-22 18:18:02.710590] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:50.855 [2024-07-22 18:18:02.710609] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.855 [2024-07-22 18:18:02.710625] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.855 [2024-07-22 18:18:02.710637] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:50.855 [2024-07-22 18:18:02.710709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.419 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:51.419 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:10:51.419 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:51.419 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:51.419 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:51.419 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.419 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:51.419 [2024-07-22 18:18:03.399217] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:51.419 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:51.419 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:51.419 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:51.419 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:51.419 ************************************ 00:10:51.419 START TEST lvs_grow_clean 00:10:51.419 ************************************ 00:10:51.419 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:10:51.419 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:51.419 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:51.677 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:51.677 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:51.677 18:18:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:51.677 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:51.677 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:51.677 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:51.677 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:51.936 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:51.936 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:52.195 18:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=16d1b7af-82e8-4d4b-a812-b6e4f4f9db87 00:10:52.195 18:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16d1b7af-82e8-4d4b-a812-b6e4f4f9db87 00:10:52.195 18:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:52.454 18:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:52.454 18:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:52.454 18:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 16d1b7af-82e8-4d4b-a812-b6e4f4f9db87 lvol 150 00:10:52.762 18:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3c58d565-72d5-4d1d-8ccd-7984efff4fe2 00:10:52.763 18:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:52.763 18:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:53.046 [2024-07-22 18:18:04.929754] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:53.046 [2024-07-22 18:18:04.929946] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:53.046 true 00:10:53.046 18:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16d1b7af-82e8-4d4b-a812-b6e4f4f9db87 00:10:53.046 18:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:53.306 18:18:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:53.306 18:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:53.565 18:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3c58d565-72d5-4d1d-8ccd-7984efff4fe2 00:10:53.825 18:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:54.084 [2024-07-22 18:18:05.918549] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.084 18:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:54.343 18:18:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=71344 00:10:54.343 18:18:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:54.343 18:18:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:54.343 18:18:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 71344 /var/tmp/bdevperf.sock 00:10:54.343 18:18:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 71344 ']' 00:10:54.343 18:18:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:54.343 18:18:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:54.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:54.343 18:18:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:54.343 18:18:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:54.343 18:18:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:54.343 [2024-07-22 18:18:06.320138] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
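Once the subsystem is listening, the test drives I/O from a second SPDK application (bdevperf, started with -z so it idles until told to run) and grows the lvstore while that I/O is in flight; total_data_clusters should move from 49 to 99 when bdev_lvol_grow_lvstore picks up the enlarged 400 MiB backing file. The sketch below uses the sockets and flags from the trace; $lvs is assumed to hold the lvstore UUID captured during setup, and the poll loop again stands in for waitforlisten.

#!/usr/bin/env bash
# Sketch of the I/O + online-grow phase of lvs_grow_clean (values from the trace).
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"
BPERF_SOCK=/var/tmp/bdevperf.sock

# Second SPDK app on core 1; -z keeps it idle until perform_tests is called.
"$SPDK/build/examples/bdevperf" -r "$BPERF_SOCK" -m 0x2 -o 4096 -q 128 \
    -w randwrite -t 10 -S 1 -z &
bdevperf_pid=$!
until "$RPC" -s "$BPERF_SOCK" spdk_get_version >/dev/null 2>&1; do sleep 0.5; done

# Connect to the target; the exported lvol appears as Nvme0n1 (38912 x 4 KiB blocks).
"$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
"$RPC" -s "$BPERF_SOCK" bdev_get_bdevs -b Nvme0n1 -t 3000

# Kick off the 10 s randwrite run, then grow the lvstore while it is running.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests &
run_test_pid=$!
sleep 2
"$RPC" bdev_lvol_grow_lvstore -u "$lvs"
"$RPC" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99

wait "$run_test_pid"
kill "$bdevperf_pid"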
00:10:54.343 [2024-07-22 18:18:06.320321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71344 ] 00:10:54.602 [2024-07-22 18:18:06.493771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.861 [2024-07-22 18:18:06.766052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.429 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:55.429 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:10:55.429 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:55.688 Nvme0n1 00:10:55.688 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:55.948 [ 00:10:55.948 { 00:10:55.948 "aliases": [ 00:10:55.948 "3c58d565-72d5-4d1d-8ccd-7984efff4fe2" 00:10:55.948 ], 00:10:55.948 "assigned_rate_limits": { 00:10:55.948 "r_mbytes_per_sec": 0, 00:10:55.948 "rw_ios_per_sec": 0, 00:10:55.948 "rw_mbytes_per_sec": 0, 00:10:55.948 "w_mbytes_per_sec": 0 00:10:55.948 }, 00:10:55.948 "block_size": 4096, 00:10:55.948 "claimed": false, 00:10:55.948 "driver_specific": { 00:10:55.948 "mp_policy": "active_passive", 00:10:55.948 "nvme": [ 00:10:55.948 { 00:10:55.948 "ctrlr_data": { 00:10:55.948 "ana_reporting": false, 00:10:55.948 "cntlid": 1, 00:10:55.948 "firmware_revision": "24.09", 00:10:55.948 "model_number": "SPDK bdev Controller", 00:10:55.948 "multi_ctrlr": true, 00:10:55.948 "oacs": { 00:10:55.948 "firmware": 0, 00:10:55.948 "format": 0, 00:10:55.948 "ns_manage": 0, 00:10:55.948 "security": 0 00:10:55.948 }, 00:10:55.948 "serial_number": "SPDK0", 00:10:55.948 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:55.948 "vendor_id": "0x8086" 00:10:55.948 }, 00:10:55.948 "ns_data": { 00:10:55.948 "can_share": true, 00:10:55.948 "id": 1 00:10:55.948 }, 00:10:55.948 "trid": { 00:10:55.948 "adrfam": "IPv4", 00:10:55.948 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:55.948 "traddr": "10.0.0.2", 00:10:55.948 "trsvcid": "4420", 00:10:55.948 "trtype": "TCP" 00:10:55.948 }, 00:10:55.948 "vs": { 00:10:55.948 "nvme_version": "1.3" 00:10:55.948 } 00:10:55.948 } 00:10:55.948 ] 00:10:55.948 }, 00:10:55.948 "memory_domains": [ 00:10:55.948 { 00:10:55.948 "dma_device_id": "system", 00:10:55.948 "dma_device_type": 1 00:10:55.948 } 00:10:55.948 ], 00:10:55.948 "name": "Nvme0n1", 00:10:55.948 "num_blocks": 38912, 00:10:55.948 "product_name": "NVMe disk", 00:10:55.948 "supported_io_types": { 00:10:55.948 "abort": true, 00:10:55.948 "compare": true, 00:10:55.948 "compare_and_write": true, 00:10:55.948 "copy": true, 00:10:55.948 "flush": true, 00:10:55.948 "get_zone_info": false, 00:10:55.948 "nvme_admin": true, 00:10:55.948 "nvme_io": true, 00:10:55.948 "nvme_io_md": false, 00:10:55.948 "nvme_iov_md": false, 00:10:55.948 "read": true, 00:10:55.948 "reset": true, 00:10:55.948 "seek_data": false, 00:10:55.948 "seek_hole": false, 00:10:55.948 "unmap": true, 00:10:55.948 "write": true, 00:10:55.948 
"write_zeroes": true, 00:10:55.948 "zcopy": false, 00:10:55.948 "zone_append": false, 00:10:55.948 "zone_management": false 00:10:55.948 }, 00:10:55.948 "uuid": "3c58d565-72d5-4d1d-8ccd-7984efff4fe2", 00:10:55.948 "zoned": false 00:10:55.948 } 00:10:55.948 ] 00:10:55.948 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=71387 00:10:55.948 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:55.948 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:56.207 Running I/O for 10 seconds... 00:10:57.141 Latency(us) 00:10:57.141 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:57.141 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:57.141 Nvme0n1 : 1.00 6285.00 24.55 0.00 0.00 0.00 0.00 0.00 00:10:57.141 =================================================================================================================== 00:10:57.141 Total : 6285.00 24.55 0.00 0.00 0.00 0.00 0.00 00:10:57.141 00:10:58.075 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 16d1b7af-82e8-4d4b-a812-b6e4f4f9db87 00:10:58.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:58.075 Nvme0n1 : 2.00 6145.00 24.00 0.00 0.00 0.00 0.00 0.00 00:10:58.075 =================================================================================================================== 00:10:58.075 Total : 6145.00 24.00 0.00 0.00 0.00 0.00 0.00 00:10:58.075 00:10:58.333 true 00:10:58.333 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16d1b7af-82e8-4d4b-a812-b6e4f4f9db87 00:10:58.334 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:58.593 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:58.593 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:58.593 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 71387 00:10:59.159 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:59.159 Nvme0n1 : 3.00 6153.33 24.04 0.00 0.00 0.00 0.00 0.00 00:10:59.159 =================================================================================================================== 00:10:59.159 Total : 6153.33 24.04 0.00 0.00 0.00 0.00 0.00 00:10:59.159 00:11:00.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:00.116 Nvme0n1 : 4.00 6176.75 24.13 0.00 0.00 0.00 0.00 0.00 00:11:00.116 =================================================================================================================== 00:11:00.116 Total : 6176.75 24.13 0.00 0.00 0.00 0.00 0.00 00:11:00.116 00:11:01.051 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:01.051 Nvme0n1 : 5.00 6186.00 24.16 0.00 0.00 0.00 0.00 0.00 00:11:01.051 
=================================================================================================================== 00:11:01.051 Total : 6186.00 24.16 0.00 0.00 0.00 0.00 0.00 00:11:01.051 00:11:01.986 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:01.986 Nvme0n1 : 6.00 6167.67 24.09 0.00 0.00 0.00 0.00 0.00 00:11:01.986 =================================================================================================================== 00:11:01.986 Total : 6167.67 24.09 0.00 0.00 0.00 0.00 0.00 00:11:01.986 00:11:03.362 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:03.362 Nvme0n1 : 7.00 6165.57 24.08 0.00 0.00 0.00 0.00 0.00 00:11:03.362 =================================================================================================================== 00:11:03.362 Total : 6165.57 24.08 0.00 0.00 0.00 0.00 0.00 00:11:03.362 00:11:04.298 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:04.298 Nvme0n1 : 8.00 6154.88 24.04 0.00 0.00 0.00 0.00 0.00 00:11:04.298 =================================================================================================================== 00:11:04.298 Total : 6154.88 24.04 0.00 0.00 0.00 0.00 0.00 00:11:04.298 00:11:05.233 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:05.233 Nvme0n1 : 9.00 6159.89 24.06 0.00 0.00 0.00 0.00 0.00 00:11:05.233 =================================================================================================================== 00:11:05.233 Total : 6159.89 24.06 0.00 0.00 0.00 0.00 0.00 00:11:05.233 00:11:06.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:06.166 Nvme0n1 : 10.00 6139.00 23.98 0.00 0.00 0.00 0.00 0.00 00:11:06.166 =================================================================================================================== 00:11:06.166 Total : 6139.00 23.98 0.00 0.00 0.00 0.00 0.00 00:11:06.166 00:11:06.166 00:11:06.166 Latency(us) 00:11:06.166 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:06.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:06.166 Nvme0n1 : 10.01 6146.19 24.01 0.00 0.00 20818.46 10068.71 48377.48 00:11:06.166 =================================================================================================================== 00:11:06.166 Total : 6146.19 24.01 0.00 0.00 20818.46 10068.71 48377.48 00:11:06.166 0 00:11:06.166 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 71344 00:11:06.166 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 71344 ']' 00:11:06.166 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 71344 00:11:06.166 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:11:06.166 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:06.166 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71344 00:11:06.166 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:06.166 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 
= sudo ']' 00:11:06.166 killing process with pid 71344 00:11:06.166 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71344' 00:11:06.166 Received shutdown signal, test time was about 10.000000 seconds 00:11:06.166 00:11:06.166 Latency(us) 00:11:06.166 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:06.166 =================================================================================================================== 00:11:06.166 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:06.166 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 71344 00:11:06.166 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 71344 00:11:07.539 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:07.539 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:07.797 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16d1b7af-82e8-4d4b-a812-b6e4f4f9db87 00:11:07.797 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:08.055 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:08.055 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:08.055 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:08.313 [2024-07-22 18:18:20.221147] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:08.313 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16d1b7af-82e8-4d4b-a812-b6e4f4f9db87 00:11:08.313 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:11:08.313 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16d1b7af-82e8-4d4b-a812-b6e4f4f9db87 00:11:08.313 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:08.313 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:08.313 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:08.313 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:08.313 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:08.313 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:08.313 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:08.313 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:08.313 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16d1b7af-82e8-4d4b-a812-b6e4f4f9db87 00:11:08.571 2024/07/22 18:18:20 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:16d1b7af-82e8-4d4b-a812-b6e4f4f9db87], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:11:08.571 request: 00:11:08.571 { 00:11:08.571 "method": "bdev_lvol_get_lvstores", 00:11:08.571 "params": { 00:11:08.571 "uuid": "16d1b7af-82e8-4d4b-a812-b6e4f4f9db87" 00:11:08.571 } 00:11:08.571 } 00:11:08.571 Got JSON-RPC error response 00:11:08.571 GoRPCClient: error on JSON-RPC call 00:11:08.571 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:11:08.571 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:08.571 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:08.571 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:08.571 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:08.830 aio_bdev 00:11:08.830 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3c58d565-72d5-4d1d-8ccd-7984efff4fe2 00:11:08.830 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=3c58d565-72d5-4d1d-8ccd-7984efff4fe2 00:11:08.830 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:08.830 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:11:08.830 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:08.830 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:08.830 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:09.088 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3c58d565-72d5-4d1d-8ccd-7984efff4fe2 -t 2000 00:11:09.346 [ 00:11:09.346 { 00:11:09.346 "aliases": [ 00:11:09.346 "lvs/lvol" 00:11:09.346 ], 00:11:09.346 "assigned_rate_limits": { 00:11:09.346 "r_mbytes_per_sec": 0, 00:11:09.346 "rw_ios_per_sec": 0, 00:11:09.346 "rw_mbytes_per_sec": 0, 00:11:09.346 
"w_mbytes_per_sec": 0 00:11:09.346 }, 00:11:09.346 "block_size": 4096, 00:11:09.346 "claimed": false, 00:11:09.346 "driver_specific": { 00:11:09.346 "lvol": { 00:11:09.346 "base_bdev": "aio_bdev", 00:11:09.346 "clone": false, 00:11:09.346 "esnap_clone": false, 00:11:09.346 "lvol_store_uuid": "16d1b7af-82e8-4d4b-a812-b6e4f4f9db87", 00:11:09.346 "num_allocated_clusters": 38, 00:11:09.346 "snapshot": false, 00:11:09.346 "thin_provision": false 00:11:09.346 } 00:11:09.346 }, 00:11:09.346 "name": "3c58d565-72d5-4d1d-8ccd-7984efff4fe2", 00:11:09.346 "num_blocks": 38912, 00:11:09.346 "product_name": "Logical Volume", 00:11:09.346 "supported_io_types": { 00:11:09.346 "abort": false, 00:11:09.346 "compare": false, 00:11:09.346 "compare_and_write": false, 00:11:09.346 "copy": false, 00:11:09.346 "flush": false, 00:11:09.346 "get_zone_info": false, 00:11:09.346 "nvme_admin": false, 00:11:09.346 "nvme_io": false, 00:11:09.346 "nvme_io_md": false, 00:11:09.346 "nvme_iov_md": false, 00:11:09.346 "read": true, 00:11:09.346 "reset": true, 00:11:09.346 "seek_data": true, 00:11:09.346 "seek_hole": true, 00:11:09.346 "unmap": true, 00:11:09.346 "write": true, 00:11:09.346 "write_zeroes": true, 00:11:09.346 "zcopy": false, 00:11:09.346 "zone_append": false, 00:11:09.346 "zone_management": false 00:11:09.346 }, 00:11:09.346 "uuid": "3c58d565-72d5-4d1d-8ccd-7984efff4fe2", 00:11:09.346 "zoned": false 00:11:09.346 } 00:11:09.346 ] 00:11:09.346 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:11:09.346 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16d1b7af-82e8-4d4b-a812-b6e4f4f9db87 00:11:09.346 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:09.608 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:09.866 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16d1b7af-82e8-4d4b-a812-b6e4f4f9db87 00:11:09.866 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:10.123 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:10.123 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 3c58d565-72d5-4d1d-8ccd-7984efff4fe2 00:11:10.381 18:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 16d1b7af-82e8-4d4b-a812-b6e4f4f9db87 00:11:10.639 18:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:10.897 18:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:11.155 ************************************ 00:11:11.155 END TEST lvs_grow_clean 00:11:11.155 ************************************ 00:11:11.155 00:11:11.155 real 0m19.703s 00:11:11.155 user 0m18.862s 
00:11:11.155 sys 0m2.373s 00:11:11.155 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:11.155 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:11.437 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:11:11.437 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:11.437 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:11.437 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:11.437 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:11.437 ************************************ 00:11:11.437 START TEST lvs_grow_dirty 00:11:11.437 ************************************ 00:11:11.438 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:11:11.438 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:11.438 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:11.438 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:11.438 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:11.438 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:11.438 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:11.438 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:11.438 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:11.438 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:11.796 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:11.796 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:11.796 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=27885fd8-f7ea-488b-94fc-85b2508da1ef 00:11:11.796 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27885fd8-f7ea-488b-94fc-85b2508da1ef 00:11:11.797 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:12.058 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:12.058 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:12.058 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 27885fd8-f7ea-488b-94fc-85b2508da1ef lvol 150 00:11:12.318 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=5e40155c-0b64-48ca-9aa9-92ca2b6a2a37 00:11:12.318 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:12.318 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:12.575 [2024-07-22 18:18:24.482384] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:12.575 [2024-07-22 18:18:24.482518] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:12.575 true 00:11:12.575 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27885fd8-f7ea-488b-94fc-85b2508da1ef 00:11:12.575 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:12.832 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:12.832 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:13.090 18:18:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5e40155c-0b64-48ca-9aa9-92ca2b6a2a37 00:11:13.654 18:18:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:13.654 [2024-07-22 18:18:25.631250] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.654 18:18:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:13.930 18:18:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=71804 00:11:13.930 18:18:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:13.930 18:18:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:13.930 18:18:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 71804 
/var/tmp/bdevperf.sock 00:11:13.930 18:18:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 71804 ']' 00:11:13.930 18:18:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:13.930 18:18:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:13.930 18:18:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:13.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:13.930 18:18:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:13.930 18:18:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:14.196 [2024-07-22 18:18:26.031870] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:14.196 [2024-07-22 18:18:26.032086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71804 ] 00:11:14.196 [2024-07-22 18:18:26.211476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.761 [2024-07-22 18:18:26.493781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.019 18:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:15.019 18:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:11:15.019 18:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:15.277 Nvme0n1 00:11:15.277 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:15.535 [ 00:11:15.535 { 00:11:15.535 "aliases": [ 00:11:15.535 "5e40155c-0b64-48ca-9aa9-92ca2b6a2a37" 00:11:15.535 ], 00:11:15.535 "assigned_rate_limits": { 00:11:15.535 "r_mbytes_per_sec": 0, 00:11:15.535 "rw_ios_per_sec": 0, 00:11:15.535 "rw_mbytes_per_sec": 0, 00:11:15.535 "w_mbytes_per_sec": 0 00:11:15.535 }, 00:11:15.535 "block_size": 4096, 00:11:15.535 "claimed": false, 00:11:15.535 "driver_specific": { 00:11:15.535 "mp_policy": "active_passive", 00:11:15.535 "nvme": [ 00:11:15.535 { 00:11:15.535 "ctrlr_data": { 00:11:15.535 "ana_reporting": false, 00:11:15.535 "cntlid": 1, 00:11:15.535 "firmware_revision": "24.09", 00:11:15.535 "model_number": "SPDK bdev Controller", 00:11:15.535 "multi_ctrlr": true, 00:11:15.535 "oacs": { 00:11:15.535 "firmware": 0, 00:11:15.535 "format": 0, 00:11:15.535 "ns_manage": 0, 00:11:15.535 "security": 0 00:11:15.535 }, 00:11:15.535 "serial_number": "SPDK0", 00:11:15.535 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:15.535 "vendor_id": "0x8086" 00:11:15.535 }, 00:11:15.535 "ns_data": { 00:11:15.535 "can_share": true, 00:11:15.535 "id": 1 00:11:15.535 }, 00:11:15.535 
"trid": { 00:11:15.535 "adrfam": "IPv4", 00:11:15.535 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:15.535 "traddr": "10.0.0.2", 00:11:15.535 "trsvcid": "4420", 00:11:15.535 "trtype": "TCP" 00:11:15.535 }, 00:11:15.535 "vs": { 00:11:15.535 "nvme_version": "1.3" 00:11:15.535 } 00:11:15.535 } 00:11:15.535 ] 00:11:15.535 }, 00:11:15.535 "memory_domains": [ 00:11:15.535 { 00:11:15.535 "dma_device_id": "system", 00:11:15.535 "dma_device_type": 1 00:11:15.535 } 00:11:15.535 ], 00:11:15.535 "name": "Nvme0n1", 00:11:15.535 "num_blocks": 38912, 00:11:15.535 "product_name": "NVMe disk", 00:11:15.535 "supported_io_types": { 00:11:15.535 "abort": true, 00:11:15.535 "compare": true, 00:11:15.535 "compare_and_write": true, 00:11:15.535 "copy": true, 00:11:15.535 "flush": true, 00:11:15.535 "get_zone_info": false, 00:11:15.535 "nvme_admin": true, 00:11:15.535 "nvme_io": true, 00:11:15.535 "nvme_io_md": false, 00:11:15.535 "nvme_iov_md": false, 00:11:15.535 "read": true, 00:11:15.535 "reset": true, 00:11:15.535 "seek_data": false, 00:11:15.535 "seek_hole": false, 00:11:15.535 "unmap": true, 00:11:15.535 "write": true, 00:11:15.535 "write_zeroes": true, 00:11:15.535 "zcopy": false, 00:11:15.535 "zone_append": false, 00:11:15.535 "zone_management": false 00:11:15.535 }, 00:11:15.535 "uuid": "5e40155c-0b64-48ca-9aa9-92ca2b6a2a37", 00:11:15.535 "zoned": false 00:11:15.535 } 00:11:15.535 ] 00:11:15.535 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=71852 00:11:15.535 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:15.535 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:15.794 Running I/O for 10 seconds... 
00:11:16.727 Latency(us) 00:11:16.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:16.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:16.727 Nvme0n1 : 1.00 6781.00 26.49 0.00 0.00 0.00 0.00 0.00 00:11:16.727 =================================================================================================================== 00:11:16.727 Total : 6781.00 26.49 0.00 0.00 0.00 0.00 0.00 00:11:16.727 00:11:17.661 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 27885fd8-f7ea-488b-94fc-85b2508da1ef 00:11:17.661 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:17.661 Nvme0n1 : 2.00 6437.00 25.14 0.00 0.00 0.00 0.00 0.00 00:11:17.661 =================================================================================================================== 00:11:17.661 Total : 6437.00 25.14 0.00 0.00 0.00 0.00 0.00 00:11:17.661 00:11:17.920 true 00:11:17.920 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:17.920 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27885fd8-f7ea-488b-94fc-85b2508da1ef 00:11:18.177 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:18.177 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:18.177 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 71852 00:11:18.742 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:18.742 Nvme0n1 : 3.00 6440.00 25.16 0.00 0.00 0.00 0.00 0.00 00:11:18.742 =================================================================================================================== 00:11:18.742 Total : 6440.00 25.16 0.00 0.00 0.00 0.00 0.00 00:11:18.742 00:11:19.676 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:19.676 Nvme0n1 : 4.00 6243.50 24.39 0.00 0.00 0.00 0.00 0.00 00:11:19.676 =================================================================================================================== 00:11:19.676 Total : 6243.50 24.39 0.00 0.00 0.00 0.00 0.00 00:11:19.676 00:11:21.050 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:21.050 Nvme0n1 : 5.00 6258.20 24.45 0.00 0.00 0.00 0.00 0.00 00:11:21.050 =================================================================================================================== 00:11:21.050 Total : 6258.20 24.45 0.00 0.00 0.00 0.00 0.00 00:11:21.050 00:11:21.632 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:21.632 Nvme0n1 : 6.00 6251.83 24.42 0.00 0.00 0.00 0.00 0.00 00:11:21.632 =================================================================================================================== 00:11:21.632 Total : 6251.83 24.42 0.00 0.00 0.00 0.00 0.00 00:11:21.632 00:11:23.006 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:23.006 Nvme0n1 : 7.00 6241.86 24.38 0.00 0.00 0.00 0.00 0.00 00:11:23.006 =================================================================================================================== 00:11:23.006 
Total : 6241.86 24.38 0.00 0.00 0.00 0.00 0.00 00:11:23.006 00:11:23.942 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:23.942 Nvme0n1 : 8.00 6217.88 24.29 0.00 0.00 0.00 0.00 0.00 00:11:23.942 =================================================================================================================== 00:11:23.942 Total : 6217.88 24.29 0.00 0.00 0.00 0.00 0.00 00:11:23.942 00:11:24.877 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:24.877 Nvme0n1 : 9.00 6165.78 24.09 0.00 0.00 0.00 0.00 0.00 00:11:24.877 =================================================================================================================== 00:11:24.877 Total : 6165.78 24.09 0.00 0.00 0.00 0.00 0.00 00:11:24.877 00:11:25.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:25.859 Nvme0n1 : 10.00 6105.50 23.85 0.00 0.00 0.00 0.00 0.00 00:11:25.860 =================================================================================================================== 00:11:25.860 Total : 6105.50 23.85 0.00 0.00 0.00 0.00 0.00 00:11:25.860 00:11:25.860 00:11:25.860 Latency(us) 00:11:25.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:25.860 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:25.860 Nvme0n1 : 10.01 6112.78 23.88 0.00 0.00 20933.14 7387.69 125829.12 00:11:25.860 =================================================================================================================== 00:11:25.860 Total : 6112.78 23.88 0.00 0.00 20933.14 7387.69 125829.12 00:11:25.860 0 00:11:25.860 18:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 71804 00:11:25.860 18:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 71804 ']' 00:11:25.860 18:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 71804 00:11:25.860 18:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:11:25.860 18:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:25.860 18:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71804 00:11:25.860 18:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:25.860 18:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:25.860 killing process with pid 71804 00:11:25.860 18:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71804' 00:11:25.860 Received shutdown signal, test time was about 10.000000 seconds 00:11:25.860 00:11:25.860 Latency(us) 00:11:25.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:25.860 =================================================================================================================== 00:11:25.860 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:25.860 18:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 71804 00:11:25.860 18:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 71804 
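The point of the run above is that the lvstore is grown while bdevperf keeps writing to the exported lvol: the AIO backing file was enlarged with truncate and rescanned (51200 -> 102400 blocks) before the run, and bdev_lvol_grow_lvstore then took the pool from 49 to 99 data clusters mid-workload. Condensed from the trace, and reusing the lvstore UUID from this run, the grow-and-verify step is roughly:

  $ truncate -s 400M test/nvmf/target/aio_bdev
  $ scripts/rpc.py bdev_aio_rescan aio_bdev
  $ scripts/rpc.py bdev_lvol_grow_lvstore -u 27885fd8-f7ea-488b-94fc-85b2508da1ef
  $ scripts/rpc.py bdev_lvol_get_lvstores -u 27885fd8-f7ea-488b-94fc-85b2508da1ef \
        | jq -r '.[0].total_data_clusters'    # 99 after the grow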
00:11:27.235 18:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:27.235 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:27.493 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27885fd8-f7ea-488b-94fc-85b2508da1ef 00:11:27.493 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:27.751 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:27.751 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:27.751 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 71177 00:11:27.751 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 71177 00:11:28.010 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 71177 Killed "${NVMF_APP[@]}" "$@" 00:11:28.010 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:28.010 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:28.010 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:28.010 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:28.010 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:28.010 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=72027 00:11:28.010 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 72027 00:11:28.010 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:28.010 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 72027 ']' 00:11:28.010 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.010 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:28.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.010 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
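This is where the "dirty" part of lvs_grow_dirty comes in: instead of unloading the lvstore cleanly, the test kill -9s the nvmf target that owns it (pid 71177 here) and starts a fresh single-core target, so the lvstore metadata on the AIO file is deliberately left dirty. When the new target re-creates the AIO bdev below, the blobstore recovery path has to replay that metadata (the "Performing recovery on blobstore" notices). A rough sketch of the intent:

  $ kill -9 "$nvmfpid"        # 71177: no clean unload, lvstore stays dirty
  $ ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  $ scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096   # triggers recovery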
00:11:28.010 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:28.010 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:28.010 [2024-07-22 18:18:39.901239] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:28.010 [2024-07-22 18:18:39.901397] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.269 [2024-07-22 18:18:40.075987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.528 [2024-07-22 18:18:40.355901] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:28.528 [2024-07-22 18:18:40.355977] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:28.528 [2024-07-22 18:18:40.355995] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.528 [2024-07-22 18:18:40.356012] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.528 [2024-07-22 18:18:40.356037] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:28.528 [2024-07-22 18:18:40.356094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.096 18:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:29.096 18:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:11:29.096 18:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:29.096 18:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:29.096 18:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:29.096 18:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.096 18:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:29.355 [2024-07-22 18:18:41.154663] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:29.355 [2024-07-22 18:18:41.155043] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:29.355 [2024-07-22 18:18:41.155258] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:29.355 18:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:29.355 18:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 5e40155c-0b64-48ca-9aa9-92ca2b6a2a37 00:11:29.355 18:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=5e40155c-0b64-48ca-9aa9-92ca2b6a2a37 00:11:29.355 18:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:29.355 18:18:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:11:29.355 18:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:29.355 18:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:29.355 18:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:29.614 18:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e40155c-0b64-48ca-9aa9-92ca2b6a2a37 -t 2000 00:11:29.872 [ 00:11:29.872 { 00:11:29.872 "aliases": [ 00:11:29.872 "lvs/lvol" 00:11:29.872 ], 00:11:29.872 "assigned_rate_limits": { 00:11:29.872 "r_mbytes_per_sec": 0, 00:11:29.872 "rw_ios_per_sec": 0, 00:11:29.872 "rw_mbytes_per_sec": 0, 00:11:29.872 "w_mbytes_per_sec": 0 00:11:29.872 }, 00:11:29.872 "block_size": 4096, 00:11:29.872 "claimed": false, 00:11:29.872 "driver_specific": { 00:11:29.872 "lvol": { 00:11:29.872 "base_bdev": "aio_bdev", 00:11:29.872 "clone": false, 00:11:29.872 "esnap_clone": false, 00:11:29.872 "lvol_store_uuid": "27885fd8-f7ea-488b-94fc-85b2508da1ef", 00:11:29.872 "num_allocated_clusters": 38, 00:11:29.872 "snapshot": false, 00:11:29.872 "thin_provision": false 00:11:29.872 } 00:11:29.872 }, 00:11:29.872 "name": "5e40155c-0b64-48ca-9aa9-92ca2b6a2a37", 00:11:29.872 "num_blocks": 38912, 00:11:29.872 "product_name": "Logical Volume", 00:11:29.872 "supported_io_types": { 00:11:29.872 "abort": false, 00:11:29.872 "compare": false, 00:11:29.872 "compare_and_write": false, 00:11:29.872 "copy": false, 00:11:29.872 "flush": false, 00:11:29.872 "get_zone_info": false, 00:11:29.872 "nvme_admin": false, 00:11:29.872 "nvme_io": false, 00:11:29.872 "nvme_io_md": false, 00:11:29.872 "nvme_iov_md": false, 00:11:29.872 "read": true, 00:11:29.872 "reset": true, 00:11:29.872 "seek_data": true, 00:11:29.872 "seek_hole": true, 00:11:29.872 "unmap": true, 00:11:29.872 "write": true, 00:11:29.872 "write_zeroes": true, 00:11:29.872 "zcopy": false, 00:11:29.872 "zone_append": false, 00:11:29.872 "zone_management": false 00:11:29.872 }, 00:11:29.872 "uuid": "5e40155c-0b64-48ca-9aa9-92ca2b6a2a37", 00:11:29.872 "zoned": false 00:11:29.872 } 00:11:29.872 ] 00:11:29.872 18:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:11:29.872 18:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27885fd8-f7ea-488b-94fc-85b2508da1ef 00:11:29.872 18:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:30.131 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:30.131 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27885fd8-f7ea-488b-94fc-85b2508da1ef 00:11:30.131 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:30.550 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( 
data_clusters == 99 )) 00:11:30.550 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:30.550 [2024-07-22 18:18:42.535930] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:30.812 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27885fd8-f7ea-488b-94fc-85b2508da1ef 00:11:30.812 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:11:30.812 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27885fd8-f7ea-488b-94fc-85b2508da1ef 00:11:30.812 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:30.812 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.812 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:30.812 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.812 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:30.812 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.812 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:30.812 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:30.812 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27885fd8-f7ea-488b-94fc-85b2508da1ef 00:11:30.812 2024/07/22 18:18:42 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:27885fd8-f7ea-488b-94fc-85b2508da1ef], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:11:30.812 request: 00:11:30.812 { 00:11:30.812 "method": "bdev_lvol_get_lvstores", 00:11:30.812 "params": { 00:11:30.812 "uuid": "27885fd8-f7ea-488b-94fc-85b2508da1ef" 00:11:30.812 } 00:11:30.812 } 00:11:30.812 Got JSON-RPC error response 00:11:30.812 GoRPCClient: error on JSON-RPC call 00:11:30.812 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:11:30.812 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:30.812 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:30.812 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:30.812 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:31.070 aio_bdev 00:11:31.329 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5e40155c-0b64-48ca-9aa9-92ca2b6a2a37 00:11:31.329 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=5e40155c-0b64-48ca-9aa9-92ca2b6a2a37 00:11:31.329 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:31.329 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:11:31.329 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:31.329 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:31.329 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:31.587 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e40155c-0b64-48ca-9aa9-92ca2b6a2a37 -t 2000 00:11:31.846 [ 00:11:31.846 { 00:11:31.846 "aliases": [ 00:11:31.846 "lvs/lvol" 00:11:31.846 ], 00:11:31.846 "assigned_rate_limits": { 00:11:31.846 "r_mbytes_per_sec": 0, 00:11:31.846 "rw_ios_per_sec": 0, 00:11:31.846 "rw_mbytes_per_sec": 0, 00:11:31.846 "w_mbytes_per_sec": 0 00:11:31.846 }, 00:11:31.846 "block_size": 4096, 00:11:31.846 "claimed": false, 00:11:31.846 "driver_specific": { 00:11:31.846 "lvol": { 00:11:31.846 "base_bdev": "aio_bdev", 00:11:31.846 "clone": false, 00:11:31.846 "esnap_clone": false, 00:11:31.846 "lvol_store_uuid": "27885fd8-f7ea-488b-94fc-85b2508da1ef", 00:11:31.846 "num_allocated_clusters": 38, 00:11:31.846 "snapshot": false, 00:11:31.846 "thin_provision": false 00:11:31.846 } 00:11:31.846 }, 00:11:31.846 "name": "5e40155c-0b64-48ca-9aa9-92ca2b6a2a37", 00:11:31.846 "num_blocks": 38912, 00:11:31.846 "product_name": "Logical Volume", 00:11:31.846 "supported_io_types": { 00:11:31.846 "abort": false, 00:11:31.846 "compare": false, 00:11:31.846 "compare_and_write": false, 00:11:31.846 "copy": false, 00:11:31.846 "flush": false, 00:11:31.846 "get_zone_info": false, 00:11:31.846 "nvme_admin": false, 00:11:31.846 "nvme_io": false, 00:11:31.846 "nvme_io_md": false, 00:11:31.846 "nvme_iov_md": false, 00:11:31.846 "read": true, 00:11:31.846 "reset": true, 00:11:31.846 "seek_data": true, 00:11:31.846 "seek_hole": true, 00:11:31.846 "unmap": true, 00:11:31.846 "write": true, 00:11:31.846 "write_zeroes": true, 00:11:31.846 "zcopy": false, 00:11:31.846 "zone_append": false, 00:11:31.846 "zone_management": false 00:11:31.846 }, 00:11:31.846 "uuid": "5e40155c-0b64-48ca-9aa9-92ca2b6a2a37", 00:11:31.846 "zoned": false 00:11:31.846 } 00:11:31.846 ] 00:11:31.846 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:11:31.846 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:31.846 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 27885fd8-f7ea-488b-94fc-85b2508da1ef 00:11:32.104 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:32.104 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:32.104 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27885fd8-f7ea-488b-94fc-85b2508da1ef 00:11:32.363 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:32.363 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5e40155c-0b64-48ca-9aa9-92ca2b6a2a37 00:11:32.621 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 27885fd8-f7ea-488b-94fc-85b2508da1ef 00:11:32.880 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:33.138 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:33.397 ************************************ 00:11:33.397 END TEST lvs_grow_dirty 00:11:33.397 ************************************ 00:11:33.397 00:11:33.397 real 0m22.151s 00:11:33.397 user 0m47.824s 00:11:33.397 sys 0m8.084s 00:11:33.397 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:33.397 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:33.397 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:11:33.397 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:33.397 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:11:33.397 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:11:33.397 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:11:33.397 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:33.397 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:11:33.397 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:11:33.397 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:11:33.397 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:33.397 nvmf_trace.0 00:11:33.655 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:11:33.655 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:33.655 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:11:33.655 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:11:33.915 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:33.915 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:11:33.915 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:33.915 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:33.915 rmmod nvme_tcp 00:11:33.915 rmmod nvme_fabrics 00:11:33.915 rmmod nvme_keyring 00:11:33.915 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:33.915 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:11:33.915 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:11:33.915 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 72027 ']' 00:11:33.915 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 72027 00:11:33.915 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 72027 ']' 00:11:33.915 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 72027 00:11:33.915 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:11:33.915 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:33.915 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72027 00:11:33.915 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:33.915 killing process with pid 72027 00:11:33.915 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:33.915 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72027' 00:11:33.915 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 72027 00:11:33.915 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 72027 00:11:35.291 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:35.291 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:35.291 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:35.291 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:35.291 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:35.291 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.291 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.291 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.291 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:35.291 00:11:35.291 real 0m45.672s 00:11:35.291 user 1m14.459s 00:11:35.291 
sys 0m11.513s 00:11:35.291 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:35.291 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:35.291 ************************************ 00:11:35.291 END TEST nvmf_lvs_grow 00:11:35.291 ************************************ 00:11:35.291 18:18:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:11:35.291 18:18:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:35.291 18:18:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:35.291 18:18:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:35.291 18:18:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:35.291 ************************************ 00:11:35.291 START TEST nvmf_bdev_io_wait 00:11:35.291 ************************************ 00:11:35.291 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:35.550 * Looking for test storage... 00:11:35.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:35.550 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:35.551 18:18:47 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.551 18:18:47 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:35.551 Cannot find device "nvmf_tgt_br" 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:35.551 Cannot find device "nvmf_tgt_br2" 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:35.551 Cannot find device "nvmf_tgt_br" 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:35.551 Cannot find device "nvmf_tgt_br2" 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:35.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:35.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:35.551 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:35.810 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:35.810 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:35.810 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:35.810 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:35.810 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:35.810 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:35.810 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:35.810 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:35.810 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:35.810 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:35.810 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:35.810 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:35.810 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:35.810 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:35.810 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:35.810 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:35.810 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:35.810 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:35.811 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:35.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:35.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:11:35.811 00:11:35.811 --- 10.0.0.2 ping statistics --- 00:11:35.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.811 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:11:35.811 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:35.811 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:11:35.811 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:11:35.811 00:11:35.811 --- 10.0.0.3 ping statistics --- 00:11:35.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.811 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:11:35.811 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:35.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:35.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:11:35.811 00:11:35.811 --- 10.0.0.1 ping statistics --- 00:11:35.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.811 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:11:35.811 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.811 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:11:35.811 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:35.811 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.811 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:35.811 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:35.811 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.811 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:35.811 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:35.811 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:35.811 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:35.811 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:35.811 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:35.811 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=72472 00:11:35.811 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 72472 00:11:35.811 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 72472 ']' 00:11:35.811 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:35.811 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.811 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:35.811 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
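Before the target above was launched, nvmftestinit/nvmf_veth_init wired up the virtual test network that the pings just verified: a host-side initiator interface at 10.0.0.1 and two target interfaces at 10.0.0.2/10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge, with TCP port 4420 opened in iptables. Condensed from the trace (link-up steps omitted; the second target interface, nvmf_tgt_if2, is set up the same way):

  $ ip netns add nvmf_tgt_ns_spdk
  $ ip link add nvmf_init_if type veth peer name nvmf_init_br
  $ ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  $ ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  $ ip addr add 10.0.0.1/24 dev nvmf_init_if
  $ ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  $ ip link add nvmf_br type bridge
  $ ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
  $ iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT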
00:11:35.811 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:35.811 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:36.070 [2024-07-22 18:18:47.912388] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:36.070 [2024-07-22 18:18:47.912567] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.070 [2024-07-22 18:18:48.085349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:36.673 [2024-07-22 18:18:48.369338] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:36.673 [2024-07-22 18:18:48.369431] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:36.673 [2024-07-22 18:18:48.369451] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:36.673 [2024-07-22 18:18:48.369467] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:36.673 [2024-07-22 18:18:48.369479] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:36.673 [2024-07-22 18:18:48.369720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.673 [2024-07-22 18:18:48.370560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.673 [2024-07-22 18:18:48.370677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:36.673 [2024-07-22 18:18:48.370682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.931 18:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:36.931 18:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:11:36.931 18:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:36.931 18:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:36.931 18:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:36.931 18:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.931 18:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:36.931 18:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.931 18:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:37.189 18:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.189 18:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:37.189 18:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.189 18:18:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:37.189 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
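Because the target was started with --wait-for-rpc, it sits idle until the framework is explicitly initialized over RPC; that is what lets the test shrink the bdev_io pool before any bdevs exist (bdev_set_options -p 5 -c 1, where -p/-c are assumed here to be the bdev_io pool and cache sizes), presumably so the write/read/flush/unmap bdevperf jobs started later hit the io_wait path this test is named for. rpc_cmd in the trace is the test suite's wrapper around scripts/rpc.py, so the startup order is roughly:

  $ ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  $ scripts/rpc.py bdev_set_options -p 5 -c 1      # only accepted before framework init
  $ scripts/rpc.py framework_start_init
  $ scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # as the trace continues below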
00:11:37.189 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:37.189 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.189 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:37.189 [2024-07-22 18:18:49.203601] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.448 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.448 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:37.448 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.448 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:37.448 Malloc0 00:11:37.448 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.448 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:37.448 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.448 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:37.448 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.448 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:37.449 [2024-07-22 18:18:49.328253] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=72529 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=72531 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 
0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:37.449 { 00:11:37.449 "params": { 00:11:37.449 "name": "Nvme$subsystem", 00:11:37.449 "trtype": "$TEST_TRANSPORT", 00:11:37.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:37.449 "adrfam": "ipv4", 00:11:37.449 "trsvcid": "$NVMF_PORT", 00:11:37.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:37.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:37.449 "hdgst": ${hdgst:-false}, 00:11:37.449 "ddgst": ${ddgst:-false} 00:11:37.449 }, 00:11:37.449 "method": "bdev_nvme_attach_controller" 00:11:37.449 } 00:11:37.449 EOF 00:11:37.449 )") 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=72532 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:37.449 { 00:11:37.449 "params": { 00:11:37.449 "name": "Nvme$subsystem", 00:11:37.449 "trtype": "$TEST_TRANSPORT", 00:11:37.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:37.449 "adrfam": "ipv4", 00:11:37.449 "trsvcid": "$NVMF_PORT", 00:11:37.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:37.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:37.449 "hdgst": ${hdgst:-false}, 00:11:37.449 "ddgst": ${ddgst:-false} 00:11:37.449 }, 00:11:37.449 "method": "bdev_nvme_attach_controller" 00:11:37.449 } 00:11:37.449 EOF 00:11:37.449 )") 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=72536 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:37.449 { 00:11:37.449 "params": { 00:11:37.449 "name": "Nvme$subsystem", 00:11:37.449 "trtype": "$TEST_TRANSPORT", 
00:11:37.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:37.449 "adrfam": "ipv4", 00:11:37.449 "trsvcid": "$NVMF_PORT", 00:11:37.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:37.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:37.449 "hdgst": ${hdgst:-false}, 00:11:37.449 "ddgst": ${ddgst:-false} 00:11:37.449 }, 00:11:37.449 "method": "bdev_nvme_attach_controller" 00:11:37.449 } 00:11:37.449 EOF 00:11:37.449 )") 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:37.449 "params": { 00:11:37.449 "name": "Nvme1", 00:11:37.449 "trtype": "tcp", 00:11:37.449 "traddr": "10.0.0.2", 00:11:37.449 "adrfam": "ipv4", 00:11:37.449 "trsvcid": "4420", 00:11:37.449 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:37.449 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:37.449 "hdgst": false, 00:11:37.449 "ddgst": false 00:11:37.449 }, 00:11:37.449 "method": "bdev_nvme_attach_controller" 00:11:37.449 }' 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:37.449 { 00:11:37.449 "params": { 00:11:37.449 "name": "Nvme$subsystem", 00:11:37.449 "trtype": "$TEST_TRANSPORT", 00:11:37.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:37.449 "adrfam": "ipv4", 00:11:37.449 "trsvcid": "$NVMF_PORT", 00:11:37.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:37.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:37.449 "hdgst": ${hdgst:-false}, 00:11:37.449 "ddgst": ${ddgst:-false} 00:11:37.449 }, 00:11:37.449 "method": "bdev_nvme_attach_controller" 00:11:37.449 } 00:11:37.449 EOF 00:11:37.449 )") 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:37.449 "params": { 00:11:37.449 "name": "Nvme1", 00:11:37.449 "trtype": "tcp", 00:11:37.449 "traddr": "10.0.0.2", 00:11:37.449 "adrfam": "ipv4", 00:11:37.449 "trsvcid": "4420", 00:11:37.449 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:37.449 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:37.449 "hdgst": false, 00:11:37.449 "ddgst": false 00:11:37.449 }, 00:11:37.449 "method": "bdev_nvme_attach_controller" 00:11:37.449 }' 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 
-- # cat 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:37.449 "params": { 00:11:37.449 "name": "Nvme1", 00:11:37.449 "trtype": "tcp", 00:11:37.449 "traddr": "10.0.0.2", 00:11:37.449 "adrfam": "ipv4", 00:11:37.449 "trsvcid": "4420", 00:11:37.449 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:37.449 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:37.449 "hdgst": false, 00:11:37.449 "ddgst": false 00:11:37.449 }, 00:11:37.449 "method": "bdev_nvme_attach_controller" 00:11:37.449 }' 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:37.449 "params": { 00:11:37.449 "name": "Nvme1", 00:11:37.449 "trtype": "tcp", 00:11:37.449 "traddr": "10.0.0.2", 00:11:37.449 "adrfam": "ipv4", 00:11:37.449 "trsvcid": "4420", 00:11:37.449 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:37.449 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:37.449 "hdgst": false, 00:11:37.449 "ddgst": false 00:11:37.449 }, 00:11:37.449 "method": "bdev_nvme_attach_controller" 00:11:37.449 }' 00:11:37.449 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 72529 00:11:37.708 [2024-07-22 18:18:49.474613] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:37.708 [2024-07-22 18:18:49.475815] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:37.708 [2024-07-22 18:18:49.506049] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:37.708 [2024-07-22 18:18:49.506233] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:37.708 [2024-07-22 18:18:49.511786] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:37.708 [2024-07-22 18:18:49.511976] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:37.708 [2024-07-22 18:18:49.512318] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
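Each bdevperf instance starting up here reads its bdev configuration from --json /dev/fd/63, which gen_nvmf_target_json feeds with the object printed above. Wrapped in SPDK's usual subsystems layout, the resolved config for one instance would look roughly like the following (the outer "subsystems"/"bdev" wrapper is an assumption based on the standard JSON config format; the inner params are copied from the printf output in the trace):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }

All four instances get the same controller definition; only the bdevperf workload (-w write/read/flush/unmap), core mask and instance id differ.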
00:11:37.708 [2024-07-22 18:18:49.512468] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:37.708 [2024-07-22 18:18:49.717558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.966 [2024-07-22 18:18:49.822212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.966 [2024-07-22 18:18:49.899726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.966 [2024-07-22 18:18:49.973280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.224 [2024-07-22 18:18:49.995681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:38.224 [2024-07-22 18:18:50.087975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:11:38.224 [2024-07-22 18:18:50.124890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:38.224 [2024-07-22 18:18:50.217979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:38.482 Running I/O for 1 seconds... 00:11:38.482 Running I/O for 1 seconds... 00:11:38.739 Running I/O for 1 seconds... 00:11:38.739 Running I/O for 1 seconds... 00:11:39.727 00:11:39.727 Latency(us) 00:11:39.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.727 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:39.727 Nvme1n1 : 1.00 158530.49 619.26 0.00 0.00 804.41 314.65 1161.77 00:11:39.727 =================================================================================================================== 00:11:39.727 Total : 158530.49 619.26 0.00 0.00 804.41 314.65 1161.77 00:11:39.727 00:11:39.727 Latency(us) 00:11:39.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.727 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:39.727 Nvme1n1 : 1.01 6869.58 26.83 0.00 0.00 18531.43 3932.16 23592.96 00:11:39.727 =================================================================================================================== 00:11:39.727 Total : 6869.58 26.83 0.00 0.00 18531.43 3932.16 23592.96 00:11:39.727 00:11:39.727 Latency(us) 00:11:39.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.727 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:39.727 Nvme1n1 : 1.01 5069.06 19.80 0.00 0.00 25093.77 5064.15 44564.48 00:11:39.727 =================================================================================================================== 00:11:39.727 Total : 5069.06 19.80 0.00 0.00 25093.77 5064.15 44564.48 00:11:39.727 00:11:39.727 Latency(us) 00:11:39.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.727 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:39.727 Nvme1n1 : 1.01 5099.61 19.92 0.00 0.00 24914.99 12153.95 37176.79 00:11:39.727 =================================================================================================================== 00:11:39.728 Total : 5099.61 19.92 0.00 0.00 24914.99 12153.95 37176.79 00:11:41.102 18:18:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 72531 00:11:41.102 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 72532 00:11:41.102 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@40 -- # wait 72536 00:11:41.102 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:41.102 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.102 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:41.102 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.102 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:41.102 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:41.102 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:41.102 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:11:41.102 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:41.102 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:11:41.102 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:41.102 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:41.102 rmmod nvme_tcp 00:11:41.102 rmmod nvme_fabrics 00:11:41.102 rmmod nvme_keyring 00:11:41.102 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:41.360 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:11:41.360 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:11:41.360 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 72472 ']' 00:11:41.360 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 72472 00:11:41.360 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 72472 ']' 00:11:41.360 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 72472 00:11:41.360 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:11:41.360 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:41.360 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72472 00:11:41.360 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:41.360 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:41.360 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72472' 00:11:41.360 killing process with pid 72472 00:11:41.360 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 72472 00:11:41.360 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 72472 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ 
tcp == \t\c\p ]] 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:42.738 00:11:42.738 real 0m7.200s 00:11:42.738 user 0m33.190s 00:11:42.738 sys 0m2.817s 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:42.738 ************************************ 00:11:42.738 END TEST nvmf_bdev_io_wait 00:11:42.738 ************************************ 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:42.738 ************************************ 00:11:42.738 START TEST nvmf_queue_depth 00:11:42.738 ************************************ 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:42.738 * Looking for test storage... 
00:11:42.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.738 18:18:54 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:42.738 18:18:54 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:42.738 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:42.739 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.739 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.739 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:42.739 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:42.739 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:42.739 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:42.739 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:42.739 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.739 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:42.739 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:42.739 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:42.739 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:42.739 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:42.739 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:42.739 Cannot find device "nvmf_tgt_br" 00:11:42.739 18:18:54 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:11:42.739 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:42.739 Cannot find device "nvmf_tgt_br2" 00:11:42.739 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:11:42.739 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:42.739 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:42.739 Cannot find device "nvmf_tgt_br" 00:11:42.739 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:11:42.739 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:42.739 Cannot find device "nvmf_tgt_br2" 00:11:42.739 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:11:42.739 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:43.004 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:43.004 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@185 
-- # ip link set nvmf_tgt_br up 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:43.004 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:43.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:43.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:11:43.004 00:11:43.004 --- 10.0.0.2 ping statistics --- 00:11:43.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.005 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:11:43.005 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:43.005 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:43.005 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:11:43.005 00:11:43.005 --- 10.0.0.3 ping statistics --- 00:11:43.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.005 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:11:43.005 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:43.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:43.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:11:43.005 00:11:43.005 --- 10.0.0.1 ping statistics --- 00:11:43.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.005 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:11:43.005 18:18:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.005 18:18:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:11:43.005 18:18:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:43.005 18:18:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.005 18:18:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:43.005 18:18:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:43.005 18:18:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.005 18:18:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:43.005 18:18:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:43.264 18:18:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:43.264 18:18:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:43.264 18:18:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:43.264 18:18:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:43.264 18:18:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=72802 00:11:43.264 18:18:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:43.264 18:18:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 72802 00:11:43.264 18:18:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 72802 ']' 00:11:43.264 18:18:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.264 18:18:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:43.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.264 18:18:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.264 18:18:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:43.264 18:18:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:43.264 [2024-07-22 18:18:55.171611] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
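Condensed from the nvmf_veth_init trace above, the throwaway network that this queue_depth target (pid 72802) has just been started inside looks like this (interface names and addresses exactly as in the log):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side, 10.0.0.1/24
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side,   10.0.0.2/24
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target side,   10.0.0.3/24
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip link add nvmf_br type bridge                                # nvmf_init_br, nvmf_tgt_br, nvmf_tgt_br2 enslaved to it
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

so 10.0.0.1 (host/initiator) reaches 10.0.0.2 and 10.0.0.3 (target namespace) across nvmf_br, which is exactly what the three pings verify.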
00:11:43.264 [2024-07-22 18:18:55.171818] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.523 [2024-07-22 18:18:55.362326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.781 [2024-07-22 18:18:55.673995] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.781 [2024-07-22 18:18:55.674082] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.781 [2024-07-22 18:18:55.674101] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.781 [2024-07-22 18:18:55.674118] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.781 [2024-07-22 18:18:55.674131] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:43.781 [2024-07-22 18:18:55.674190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:44.347 [2024-07-22 18:18:56.203332] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:44.347 Malloc0 00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
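The target-side provisioning for this test is the same minimal sequence used earlier for bdev_io_wait: one TCP transport, one Malloc0 bdev, and one subsystem with a single namespace and a listener on 10.0.0.2:4420 (the namespace and listener calls follow just below). Run by hand against a live target, the equivalent rpc.py sequence would be roughly this sketch (flags copied verbatim from the trace; the 64/512 units are assumed to be MB and bytes):

    # sketch: target-side provisioning as plain rpc.py calls (default /var/tmp/spdk.sock)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420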
00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:44.347 [2024-07-22 18:18:56.325217] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.347 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=72852 00:11:44.348 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:44.348 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 72852 /var/tmp/bdevperf.sock 00:11:44.348 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 72852 ']' 00:11:44.348 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:44.348 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:44.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:44.348 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:44.348 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:44.348 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:44.348 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:44.606 [2024-07-22 18:18:56.460367] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
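On the host side the test runs bdevperf in wait-for-RPC mode (-z) against its own socket, then attaches the exported namespace as an NVMe bdev and starts the 10-second verify workload at queue depth 1024 (the attach and perform_tests calls appear just below). A hand-run sketch of that flow, with paths relative to the SPDK tree as an assumption:

    # sketch: drive the queue-depth workload by hand
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # kicks off the run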
00:11:44.606 [2024-07-22 18:18:56.460643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72852 ] 00:11:44.864 [2024-07-22 18:18:56.653978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.123 [2024-07-22 18:18:56.980597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.690 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:45.690 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:11:45.690 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:45.690 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.690 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:45.690 NVMe0n1 00:11:45.690 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.690 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:45.690 Running I/O for 10 seconds... 00:11:57.941 00:11:57.941 Latency(us) 00:11:57.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:57.941 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:57.941 Verification LBA range: start 0x0 length 0x4000 00:11:57.941 NVMe0n1 : 10.10 6491.55 25.36 0.00 0.00 156962.10 29312.47 106287.48 00:11:57.941 =================================================================================================================== 00:11:57.941 Total : 6491.55 25.36 0.00 0.00 156962.10 29312.47 106287.48 00:11:57.941 0 00:11:57.941 18:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 72852 00:11:57.941 18:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 72852 ']' 00:11:57.941 18:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 72852 00:11:57.941 18:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:11:57.941 18:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:57.941 18:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72852 00:11:57.941 18:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:57.941 18:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:57.941 killing process with pid 72852 00:11:57.941 18:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72852' 00:11:57.941 Received shutdown signal, test time was about 10.000000 seconds 00:11:57.941 00:11:57.941 Latency(us) 00:11:57.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:57.941 
=================================================================================================================== 00:11:57.941 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:57.941 18:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 72852 00:11:57.941 18:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 72852 00:11:57.941 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:57.941 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:57.941 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:57.941 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:11:57.941 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:57.941 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:11:57.941 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:57.941 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:57.941 rmmod nvme_tcp 00:11:57.941 rmmod nvme_fabrics 00:11:57.941 rmmod nvme_keyring 00:11:57.941 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:57.941 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:11:57.941 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:11:57.941 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 72802 ']' 00:11:57.941 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 72802 00:11:57.941 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 72802 ']' 00:11:57.941 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 72802 00:11:57.941 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:11:57.941 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:57.941 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72802 00:11:57.941 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:57.941 killing process with pid 72802 00:11:57.941 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:57.941 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72802' 00:11:57.941 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 72802 00:11:57.941 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 72802 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:58.877 18:19:10 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:58.877 00:11:58.877 real 0m16.190s 00:11:58.877 user 0m26.992s 00:11:58.877 sys 0m2.580s 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:58.877 ************************************ 00:11:58.877 END TEST nvmf_queue_depth 00:11:58.877 ************************************ 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:58.877 ************************************ 00:11:58.877 START TEST nvmf_target_multipath 00:11:58.877 ************************************ 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:58.877 * Looking for test storage... 
00:11:58.877 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.877 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:59.136 18:19:10 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:59.136 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:59.136 Cannot find device "nvmf_tgt_br" 00:11:59.137 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:11:59.137 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:59.137 Cannot find device "nvmf_tgt_br2" 00:11:59.137 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:11:59.137 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:59.137 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:59.137 Cannot find device "nvmf_tgt_br" 00:11:59.137 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:11:59.137 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:59.137 Cannot find device "nvmf_tgt_br2" 00:11:59.137 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:11:59.137 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:59.137 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:59.137 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:59.137 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:59.137 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:11:59.137 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:59.137 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:59.137 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:11:59.137 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:59.137 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:59.137 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:59.137 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:11:59.137 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:59.137 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:59.395 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:59.395 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:59.395 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:59.395 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:59.395 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:59.395 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:59.395 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:59.395 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:59.395 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:59.395 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:59.395 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:59.395 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:59.395 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:59.395 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:59.395 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:59.395 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:59.395 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:59.395 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:59.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:59.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:11:59.395 00:11:59.395 --- 10.0.0.2 ping statistics --- 00:11:59.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.396 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:11:59.396 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:59.396 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:11:59.396 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:11:59.396 00:11:59.396 --- 10.0.0.3 ping statistics --- 00:11:59.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.396 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:11:59.396 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:59.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:59.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:11:59.396 00:11:59.396 --- 10.0.0.1 ping statistics --- 00:11:59.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.396 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:59.396 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.396 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:11:59.396 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:59.396 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.396 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:59.396 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:59.396 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.396 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:59.396 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:59.396 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:11:59.396 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:11:59.396 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:11:59.396 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:59.396 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:59.396 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:59.396 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=73214 00:11:59.396 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 73214 00:11:59.396 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 73214 ']' 00:11:59.396 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.396 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:59.396 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:59.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:59.396 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.396 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:59.396 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:59.654 [2024-07-22 18:19:11.477087] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:59.654 [2024-07-22 18:19:11.477302] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.654 [2024-07-22 18:19:11.664968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.221 [2024-07-22 18:19:11.971798] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.221 [2024-07-22 18:19:11.971927] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.221 [2024-07-22 18:19:11.971947] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.221 [2024-07-22 18:19:11.971962] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.221 [2024-07-22 18:19:11.971975] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:00.221 [2024-07-22 18:19:11.972275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.221 [2024-07-22 18:19:11.972603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.221 [2024-07-22 18:19:11.972627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.221 [2024-07-22 18:19:11.973373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.479 18:19:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:00.479 18:19:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:12:00.479 18:19:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:00.479 18:19:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:00.479 18:19:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:00.479 18:19:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.479 18:19:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:00.737 [2024-07-22 18:19:12.695909] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.737 18:19:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:12:01.303 Malloc0 00:12:01.303 18:19:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 
00:12:01.303 18:19:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:01.561 18:19:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.819 [2024-07-22 18:19:13.773308] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.819 18:19:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:02.078 [2024-07-22 18:19:14.013649] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:02.078 18:19:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:12:02.336 18:19:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:12:02.595 18:19:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:12:02.595 18:19:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:12:02.595 18:19:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:02.595 18:19:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:02.595 18:19:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ 
nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:04.497 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:04.755 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:04.755 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:12:04.755 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=73357 00:12:04.755 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:12:04.755 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:12:04.755 [global] 00:12:04.755 thread=1 00:12:04.755 invalidate=1 00:12:04.755 rw=randrw 00:12:04.755 time_based=1 00:12:04.755 runtime=6 00:12:04.755 ioengine=libaio 00:12:04.755 direct=1 00:12:04.755 bs=4096 00:12:04.755 iodepth=128 00:12:04.755 norandommap=0 00:12:04.755 numjobs=1 00:12:04.755 00:12:04.755 verify_dump=1 00:12:04.755 verify_backlog=512 00:12:04.755 verify_state_save=0 00:12:04.755 do_verify=1 00:12:04.755 verify=crc32c-intel 00:12:04.755 [job0] 00:12:04.755 filename=/dev/nvme0n1 00:12:04.755 Could not set queue depth (nvme0n1) 00:12:04.755 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:04.755 fio-3.35 00:12:04.755 Starting 1 thread 00:12:05.688 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:12:05.946 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:12:06.204 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:12:06.204 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:12:06.204 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:06.204 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:06.204 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:06.204 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:06.204 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:12:06.204 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:12:06.204 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:06.204 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:06.204 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:06.204 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:06.204 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:12:07.139 18:19:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:12:07.139 18:19:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:07.139 18:19:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:07.139 18:19:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:12:07.396 18:19:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:12:07.654 18:19:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:12:07.654 18:19:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:12:07.654 18:19:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:07.654 18:19:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:07.654 18:19:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:07.654 18:19:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:07.654 18:19:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:12:07.654 18:19:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:12:07.654 18:19:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:07.654 18:19:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:07.654 18:19:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:07.654 18:19:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:07.654 18:19:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:12:09.030 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:12:09.030 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:09.030 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:09.030 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 73357 00:12:10.931 00:12:10.931 job0: (groupid=0, jobs=1): err= 0: pid=73378: Mon Jul 22 18:19:22 2024 00:12:10.931 read: IOPS=7606, BW=29.7MiB/s (31.2MB/s)(179MiB/6008msec) 00:12:10.931 slat (usec): min=4, max=6736, avg=76.12, stdev=345.33 00:12:10.931 clat (usec): min=2682, max=26776, avg=11282.41, stdev=2061.28 00:12:10.931 lat (usec): min=2783, max=26793, avg=11358.52, stdev=2074.81 00:12:10.931 clat percentiles (usec): 00:12:10.931 | 1.00th=[ 6652], 5.00th=[ 8455], 10.00th=[ 9503], 20.00th=[ 9896], 00:12:10.931 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10945], 60.00th=[11469], 00:12:10.931 | 70.00th=[11994], 80.00th=[12518], 90.00th=[13566], 95.00th=[15270], 00:12:10.931 | 99.00th=[17957], 99.50th=[19006], 99.90th=[24249], 99.95th=[25560], 00:12:10.931 | 99.99th=[26608] 00:12:10.931 bw ( KiB/s): min= 8032, max=20632, per=54.57%, avg=16602.00, stdev=4246.36, samples=11 00:12:10.931 iops : min= 2008, max= 5158, avg=4150.45, stdev=1061.58, samples=11 00:12:10.931 write: IOPS=4519, BW=17.7MiB/s (18.5MB/s)(98.0MiB/5551msec); 0 zone resets 00:12:10.931 slat (usec): min=4, max=3502, avg=89.51, stdev=254.50 00:12:10.931 clat (usec): min=2599, max=26690, avg=9831.82, stdev=1738.94 00:12:10.931 lat (usec): min=2627, max=26717, avg=9921.33, stdev=1751.11 00:12:10.931 clat percentiles (usec): 00:12:10.931 | 1.00th=[ 5211], 5.00th=[ 7373], 10.00th=[ 8225], 20.00th=[ 8848], 00:12:10.931 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10028], 00:12:10.931 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11338], 95.00th=[12780], 00:12:10.931 | 99.00th=[15926], 99.50th=[17171], 99.90th=[19530], 99.95th=[20317], 00:12:10.931 | 99.99th=[21627] 00:12:10.931 bw ( KiB/s): min= 8192, max=20480, per=91.80%, avg=16595.45, stdev=4038.31, samples=11 00:12:10.931 iops : min= 2048, max= 5120, avg=4148.82, stdev=1009.57, samples=11 00:12:10.931 lat (msec) : 4=0.03%, 10=35.98%, 20=63.77%, 50=0.22% 00:12:10.931 cpu : usr=4.66%, sys=18.55%, ctx=4445, majf=0, minf=96 00:12:10.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:12:10.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:10.931 issued rwts: total=45698,25088,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:10.931 00:12:10.931 Run status group 0 (all jobs): 00:12:10.931 READ: bw=29.7MiB/s (31.2MB/s), 29.7MiB/s-29.7MiB/s (31.2MB/s-31.2MB/s), io=179MiB (187MB), run=6008-6008msec 00:12:10.931 WRITE: bw=17.7MiB/s (18.5MB/s), 17.7MiB/s-17.7MiB/s (18.5MB/s-18.5MB/s), io=98.0MiB (103MB), run=5551-5551msec 00:12:10.931 00:12:10.931 Disk stats (read/write): 00:12:10.931 nvme0n1: ios=45070/24576, merge=0/0, ticks=481786/227905, in_queue=709691, util=98.75% 00:12:10.931 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:12:11.190 18:19:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:12:11.447 18:19:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:12:11.447 18:19:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:12:11.447 18:19:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:11.447 18:19:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:11.447 18:19:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:11.447 18:19:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:11.447 18:19:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:12:11.447 18:19:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:12:11.447 18:19:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:11.447 18:19:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:11.447 18:19:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:11.447 18:19:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:12:11.447 18:19:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:12:12.845 18:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:12:12.845 18:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:12.845 18:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:12.845 18:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:12:12.845 18:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=73504 00:12:12.845 18:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:12:12.845 18:19:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:12:12.845 [global] 00:12:12.845 thread=1 00:12:12.845 invalidate=1 00:12:12.845 rw=randrw 00:12:12.845 time_based=1 00:12:12.845 runtime=6 00:12:12.845 ioengine=libaio 00:12:12.845 direct=1 00:12:12.845 bs=4096 00:12:12.845 iodepth=128 00:12:12.845 norandommap=0 00:12:12.845 numjobs=1 00:12:12.845 00:12:12.845 verify_dump=1 00:12:12.845 verify_backlog=512 00:12:12.845 verify_state_save=0 00:12:12.845 do_verify=1 00:12:12.845 verify=crc32c-intel 00:12:12.845 [job0] 00:12:12.845 filename=/dev/nvme0n1 00:12:12.845 Could not set queue depth (nvme0n1) 00:12:12.845 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:12.845 fio-3.35 00:12:12.845 Starting 1 thread 00:12:13.778 18:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:12:13.778 18:19:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:12:14.345 18:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:12:14.345 18:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:12:14.345 18:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:14.345 18:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:14.345 18:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:14.345 18:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:14.345 18:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:12:14.345 18:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:12:14.345 18:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:14.345 18:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:14.345 18:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:14.345 18:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:14.345 18:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:12:15.301 18:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:12:15.301 18:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:15.301 18:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:15.301 18:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:12:15.559 18:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:12:15.816 18:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:12:15.816 18:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:12:15.816 18:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:15.816 18:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:15.816 18:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:15.816 18:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:15.816 18:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:12:15.816 18:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:12:15.816 18:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:15.816 18:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:15.816 18:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:15.816 18:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:15.816 18:19:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:12:16.751 18:19:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:12:16.751 18:19:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:16.751 18:19:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:16.751 18:19:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 73504 00:12:19.284 00:12:19.284 job0: (groupid=0, jobs=1): err= 0: pid=73531: Mon Jul 22 18:19:30 2024 00:12:19.284 read: IOPS=8746, BW=34.2MiB/s (35.8MB/s)(205MiB/6005msec) 00:12:19.284 slat (usec): min=6, max=7461, avg=60.02, stdev=309.59 00:12:19.284 clat (usec): min=433, max=28621, avg=10229.02, stdev=2765.23 00:12:19.284 lat (usec): min=466, max=28636, avg=10289.04, stdev=2787.98 00:12:19.284 clat percentiles (usec): 00:12:19.284 | 1.00th=[ 2442], 5.00th=[ 5538], 10.00th=[ 6587], 20.00th=[ 8291], 00:12:19.284 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:12:19.284 | 70.00th=[11207], 80.00th=[11994], 90.00th=[12911], 95.00th=[14222], 00:12:19.284 | 99.00th=[18744], 99.50th=[20841], 99.90th=[24249], 99.95th=[25297], 00:12:19.284 | 99.99th=[26346] 00:12:19.284 bw ( KiB/s): min= 5848, max=29984, per=53.73%, avg=18800.00, stdev=6753.05, samples=11 00:12:19.284 iops : min= 1462, max= 7496, avg=4700.00, stdev=1688.26, samples=11 00:12:19.284 write: IOPS=5257, BW=20.5MiB/s (21.5MB/s)(103MiB/5016msec); 0 zone resets 00:12:19.284 slat (usec): min=14, max=15790, avg=69.57, stdev=221.48 00:12:19.284 clat (usec): min=358, max=24874, avg=8532.05, stdev=2875.96 00:12:19.284 lat (usec): min=401, max=24899, avg=8601.62, stdev=2893.13 00:12:19.284 clat percentiles (usec): 00:12:19.284 | 1.00th=[ 1483], 5.00th=[ 3687], 10.00th=[ 4555], 20.00th=[ 5932], 00:12:19.284 | 30.00th=[ 7439], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9503], 00:12:19.284 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[10945], 95.00th=[11731], 00:12:19.284 | 99.00th=[17171], 99.50th=[18220], 99.90th=[22676], 99.95th=[23462], 00:12:19.284 | 99.99th=[23725] 00:12:19.284 bw ( KiB/s): min= 6192, max=31104, per=89.43%, avg=18808.73, stdev=6656.15, samples=11 00:12:19.284 iops : min= 1548, max= 7776, avg=4702.18, stdev=1664.04, samples=11 00:12:19.284 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.09% 00:12:19.284 lat (msec) : 2=0.93%, 4=2.35%, 10=44.85%, 20=51.19%, 50=0.52% 00:12:19.284 cpu : usr=5.36%, sys=19.72%, ctx=5196, majf=0, minf=121 00:12:19.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:19.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:19.284 issued rwts: total=52525,26374,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:19.284 00:12:19.284 Run status group 0 (all jobs): 00:12:19.284 READ: bw=34.2MiB/s (35.8MB/s), 34.2MiB/s-34.2MiB/s (35.8MB/s-35.8MB/s), io=205MiB (215MB), run=6005-6005msec 00:12:19.284 WRITE: bw=20.5MiB/s (21.5MB/s), 20.5MiB/s-20.5MiB/s (21.5MB/s-21.5MB/s), io=103MiB (108MB), run=5016-5016msec 00:12:19.284 00:12:19.284 Disk stats (read/write): 00:12:19.284 nvme0n1: ios=51945/25922, merge=0/0, ticks=500890/207099, in_queue=707989, util=98.63% 00:12:19.284 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:19.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:19.284 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:12:19.284 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:12:19.284 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:19.284 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.284 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.284 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:19.284 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:12:19.284 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.284 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:12:19.284 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:12:19.284 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:12:19.284 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:12:19.284 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:19.284 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:19.284 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:19.284 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:19.284 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:19.284 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:19.284 rmmod nvme_tcp 00:12:19.284 rmmod nvme_fabrics 00:12:19.284 rmmod nvme_keyring 00:12:19.284 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:19.284 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:19.284 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:19.284 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 73214 ']' 00:12:19.284 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 73214 00:12:19.284 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 73214 ']' 00:12:19.284 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 73214 00:12:19.284 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:12:19.284 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:19.284 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73214 00:12:19.543 killing process with pid 73214 00:12:19.543 18:19:31 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:19.543 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:19.543 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73214' 00:12:19.543 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 73214 00:12:19.543 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 73214 00:12:20.920 18:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:20.920 18:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:20.920 18:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:20.920 18:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:20.920 18:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:20.920 18:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.920 18:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.920 18:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.920 18:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:20.920 ************************************ 00:12:20.920 END TEST nvmf_target_multipath 00:12:20.920 ************************************ 00:12:20.920 00:12:20.920 real 0m22.089s 00:12:20.920 user 1m23.894s 00:12:20.920 sys 0m6.040s 00:12:20.920 18:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:20.920 18:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:20.920 18:19:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:12:20.920 18:19:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:20.920 18:19:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:20.920 18:19:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:20.920 18:19:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:20.920 ************************************ 00:12:20.920 START TEST nvmf_zcopy 00:12:20.920 ************************************ 00:12:20.920 18:19:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:21.179 * Looking for test storage... 
00:12:21.179 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:21.179 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:21.179 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:21.179 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.179 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 
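The host-side knobs defined here (host NQN, host ID, serial number) are what an initiator later uses to reach the target and to find its namespace. Illustrative only, since this particular test attaches through bdevperf rather than the kernel initiator, but with the values from this run a kernel-side connect would look roughly like:

  # nvme-cli: generate a host NQN; the host ID is the UUID portion of it
  NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}

  # connect to the listener the test creates on 10.0.0.2:4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

  # the attached namespace is then located by its serial number
  lsblk -o NAME,SERIAL | grep -w SPDKISFASTANDAWESOME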
00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:21.180 Cannot find device "nvmf_tgt_br" 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:21.180 Cannot find device "nvmf_tgt_br2" 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:21.180 Cannot find device "nvmf_tgt_br" 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # true 
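The "Cannot find device" messages are just the cleanup of interfaces left over from a previous run; the commands that follow rebuild the virtual test network from scratch. Condensed (link-up steps, the FORWARD iptables rule and the second target interface nvmf_tgt_if2/10.0.0.3 are handled the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator leg stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target leg moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                              # bridge ties the host-side peers together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings further down (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm that the bridge forwards in both directions before the target is started.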
00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:21.180 Cannot find device "nvmf_tgt_br2" 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:21.180 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:21.442 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:21.442 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:12:21.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:21.443 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:21.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:12:21.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:21.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:21.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:21.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:21.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:21.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:21.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:21.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:21.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:21.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:21.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:21.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:21.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:21.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:21.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:21.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:21.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:21.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:21.443 18:19:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:21.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:21.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:21.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:21.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:21.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:21.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:21.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:12:21.443 00:12:21.443 --- 10.0.0.2 ping statistics --- 00:12:21.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.443 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:12:21.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:21.444 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:21.444 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:12:21.444 00:12:21.444 --- 10.0.0.3 ping statistics --- 00:12:21.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.444 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:12:21.444 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:21.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:21.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:12:21.444 00:12:21.444 --- 10.0.0.1 ping statistics --- 00:12:21.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.444 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:12:21.444 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.444 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:12:21.444 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:21.444 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.444 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:21.444 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:21.444 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.444 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:21.444 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:21.444 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:21.444 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:21.444 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:21.444 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:21.444 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=73822 00:12:21.444 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:21.444 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 73822 00:12:21.444 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 73822 ']' 00:12:21.444 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.444 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:21.444 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.444 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:21.444 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:21.706 [2024-07-22 18:19:33.547927] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:21.706 [2024-07-22 18:19:33.548404] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.964 [2024-07-22 18:19:33.733875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.223 [2024-07-22 18:19:34.052835] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.223 [2024-07-22 18:19:34.053250] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.223 [2024-07-22 18:19:34.053282] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.223 [2024-07-22 18:19:34.053302] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.223 [2024-07-22 18:19:34.053316] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
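The target binary is launched inside the namespace, so only its veth leg sees 10.0.0.2, and the harness then blocks until the RPC socket answers. A minimal stand-in for that start-and-wait step (the waitforlisten helper used above does additional checking), assuming the standard repo layout from this run:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!

  # poll the UNIX-domain RPC socket until the application is ready to serve RPCs
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until "$rpc" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done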
00:12:22.223 [2024-07-22 18:19:34.053395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.480 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:22.480 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:12:22.480 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:22.480 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:22.480 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:22.740 [2024-07-22 18:19:34.553303] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:22.740 [2024-07-22 18:19:34.569789] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:22.740 malloc0 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:22.740 { 00:12:22.740 "params": { 00:12:22.740 "name": "Nvme$subsystem", 00:12:22.740 "trtype": "$TEST_TRANSPORT", 00:12:22.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:22.740 "adrfam": "ipv4", 00:12:22.740 "trsvcid": "$NVMF_PORT", 00:12:22.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:22.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:22.740 "hdgst": ${hdgst:-false}, 00:12:22.740 "ddgst": ${ddgst:-false} 00:12:22.740 }, 00:12:22.740 "method": "bdev_nvme_attach_controller" 00:12:22.740 } 00:12:22.740 EOF 00:12:22.740 )") 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:22.740 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:22.740 "params": { 00:12:22.740 "name": "Nvme1", 00:12:22.740 "trtype": "tcp", 00:12:22.740 "traddr": "10.0.0.2", 00:12:22.740 "adrfam": "ipv4", 00:12:22.740 "trsvcid": "4420", 00:12:22.740 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:22.740 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:22.740 "hdgst": false, 00:12:22.740 "ddgst": false 00:12:22.740 }, 00:12:22.740 "method": "bdev_nvme_attach_controller" 00:12:22.740 }' 00:12:22.999 [2024-07-22 18:19:34.791724] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:22.999 [2024-07-22 18:19:34.792035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73873 ] 00:12:22.999 [2024-07-22 18:19:34.967189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.257 [2024-07-22 18:19:35.245720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.825 Running I/O for 10 seconds... 
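Condensed, the provisioning traced above and the hand-off to bdevperf: the TCP transport is created with --zcopy, a subsystem with one malloc-backed namespace is exported on 10.0.0.2:4420, and bdevperf attaches as an NVMe/TCP host using the JSON printed above. rpc.py is shown directly here instead of the rpc_cmd wrapper, and gen_nvmf_target_json assumes test/nvmf/common.sh is sourced; the process substitution is the /dev/fd/62 seen in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$rpc" nvmf_create_transport -t tcp -o -c 0 --zcopy
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  "$rpc" bdev_malloc_create 32 4096 -b malloc0     # 32 MB malloc bdev, 4096-byte blocks
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # 10 s verify workload, queue depth 128, 8 KiB I/O, config fed in via process substitution
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192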
00:12:33.796 00:12:33.796 Latency(us) 00:12:33.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.796 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:33.796 Verification LBA range: start 0x0 length 0x1000 00:12:33.796 Nvme1n1 : 10.02 4320.10 33.75 0.00 0.00 29547.47 4289.63 41704.73 00:12:33.796 =================================================================================================================== 00:12:33.796 Total : 4320.10 33.75 0.00 0.00 29547.47 4289.63 41704.73 00:12:35.174 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=74009 00:12:35.174 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:35.174 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:35.174 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:35.174 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:35.174 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:35.174 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:35.174 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:35.174 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:35.174 { 00:12:35.174 "params": { 00:12:35.174 "name": "Nvme$subsystem", 00:12:35.174 "trtype": "$TEST_TRANSPORT", 00:12:35.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:35.174 "adrfam": "ipv4", 00:12:35.174 "trsvcid": "$NVMF_PORT", 00:12:35.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:35.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:35.174 "hdgst": ${hdgst:-false}, 00:12:35.174 "ddgst": ${ddgst:-false} 00:12:35.174 }, 00:12:35.174 "method": "bdev_nvme_attach_controller" 00:12:35.174 } 00:12:35.174 EOF 00:12:35.174 )") 00:12:35.174 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:35.174 [2024-07-22 18:19:46.984708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.174 [2024-07-22 18:19:46.984783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.174 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
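As a cross-check on the 10-second verify run reported above: 4320.10 IO/s at an 8192-byte I/O size is 4320.10 × 8192 ≈ 35,390,259 bytes/s ≈ 33.75 MiB/s, matching the MiB/s column for Nvme1n1 (and the Total row, since there is only one job). The second bdevperf invocation set up just above switches to a 5-second 50/50 random read/write workload (-t 5 -q 128 -w randrw -M 50 -o 8192) against the same namespace; its startup is interleaved with the RPC calls whose output follows.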
00:12:35.174 2024/07/22 18:19:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.174 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:35.174 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:35.174 "params": { 00:12:35.174 "name": "Nvme1", 00:12:35.174 "trtype": "tcp", 00:12:35.174 "traddr": "10.0.0.2", 00:12:35.174 "adrfam": "ipv4", 00:12:35.174 "trsvcid": "4420", 00:12:35.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:35.174 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:35.174 "hdgst": false, 00:12:35.174 "ddgst": false 00:12:35.174 }, 00:12:35.174 "method": "bdev_nvme_attach_controller" 00:12:35.174 }' 00:12:35.174 [2024-07-22 18:19:46.996836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.174 [2024-07-22 18:19:46.996939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.174 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.174 [2024-07-22 18:19:47.004607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.174 [2024-07-22 18:19:47.004649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.174 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.174 [2024-07-22 18:19:47.016636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.174 [2024-07-22 18:19:47.016686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.174 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.174 [2024-07-22 18:19:47.028639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.174 [2024-07-22 18:19:47.028704] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.174 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.174 [2024-07-22 18:19:47.040690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.174 [2024-07-22 18:19:47.040733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.174 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error 
received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.174 [2024-07-22 18:19:47.052645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.174 [2024-07-22 18:19:47.052707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.174 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.174 [2024-07-22 18:19:47.064650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.174 [2024-07-22 18:19:47.064691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.174 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.174 [2024-07-22 18:19:47.076712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.174 [2024-07-22 18:19:47.076759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.174 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.174 [2024-07-22 18:19:47.088772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.174 [2024-07-22 18:19:47.088828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.174 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.174 [2024-07-22 18:19:47.100674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.174 [2024-07-22 18:19:47.100721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.174 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.174 [2024-07-22 18:19:47.112672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.174 [2024-07-22 18:19:47.112722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.174 [2024-07-22 18:19:47.116686] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
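Each failure in this stretch comes from asking the target to add NSID 1 to cnode1 while that namespace is still attached: spdk_nvmf_subsystem_add_ns_ext rejects the request ("Requested NSID 1 already in use") and the JSON-RPC client wrapper logs the resulting -32602 reply, while the run keeps going. Reconstructed from the parameters printed in these log lines, one such exchange over /var/tmp/spdk.sock looks roughly like this (the request id is illustrative):

  --> {"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
       "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                  "namespace": {"bdev_name": "malloc0", "nsid": 1, "no_auto_visible": false}}}
  <-- {"jsonrpc": "2.0", "id": 1, "error": {"code": -32602, "message": "Invalid parameters"}}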
00:12:35.174 [2024-07-22 18:19:47.116881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.175 -allocations --file-prefix=spdk_pid74009 ] 00:12:35.175 [2024-07-22 18:19:47.124710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.175 [2024-07-22 18:19:47.124760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.175 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.175 [2024-07-22 18:19:47.136679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.175 [2024-07-22 18:19:47.136725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.175 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.175 [2024-07-22 18:19:47.148684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.175 [2024-07-22 18:19:47.148737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.175 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.175 [2024-07-22 18:19:47.160717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.175 [2024-07-22 18:19:47.160754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.175 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.175 [2024-07-22 18:19:47.172682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.175 [2024-07-22 18:19:47.172717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.175 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.175 [2024-07-22 18:19:47.184702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.175 [2024-07-22 18:19:47.184740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:12:35.175 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.435 [2024-07-22 18:19:47.196723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.435 [2024-07-22 18:19:47.196759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.435 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.435 [2024-07-22 18:19:47.208738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.435 [2024-07-22 18:19:47.208783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.435 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.435 [2024-07-22 18:19:47.220733] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.435 [2024-07-22 18:19:47.220771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.435 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.435 [2024-07-22 18:19:47.232777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.435 [2024-07-22 18:19:47.232824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.435 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.435 [2024-07-22 18:19:47.244805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.435 [2024-07-22 18:19:47.244860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.435 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.435 [2024-07-22 18:19:47.256798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.435 [2024-07-22 18:19:47.256862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.435 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:12:35.435 [2024-07-22 18:19:47.268778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.435 [2024-07-22 18:19:47.268868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.435 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.435 [2024-07-22 18:19:47.280781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.435 [2024-07-22 18:19:47.280820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.435 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.435 [2024-07-22 18:19:47.292759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.435 [2024-07-22 18:19:47.292798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.435 [2024-07-22 18:19:47.294442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.435 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.435 [2024-07-22 18:19:47.304858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.435 [2024-07-22 18:19:47.304901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.436 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.436 [2024-07-22 18:19:47.316865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.436 [2024-07-22 18:19:47.316915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.436 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.436 [2024-07-22 18:19:47.328781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.436 [2024-07-22 18:19:47.328822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.436 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.436 [2024-07-22 18:19:47.340827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.436 [2024-07-22 18:19:47.340876] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.436 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.436 [2024-07-22 18:19:47.352893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.436 [2024-07-22 18:19:47.352951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.436 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.436 [2024-07-22 18:19:47.364894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.436 [2024-07-22 18:19:47.364959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.436 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.436 [2024-07-22 18:19:47.376930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.436 [2024-07-22 18:19:47.376978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.436 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.436 [2024-07-22 18:19:47.388890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.436 [2024-07-22 18:19:47.388934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.436 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.436 [2024-07-22 18:19:47.400909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.436 [2024-07-22 18:19:47.400955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.436 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.436 [2024-07-22 18:19:47.412896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.436 [2024-07-22 18:19:47.412941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.436 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.436 [2024-07-22 18:19:47.424888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.436 [2024-07-22 18:19:47.424937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.436 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.436 [2024-07-22 18:19:47.436992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.436 [2024-07-22 18:19:47.437043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.436 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.436 [2024-07-22 18:19:47.448973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.436 [2024-07-22 18:19:47.449039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.697 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.697 [2024-07-22 18:19:47.460907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.697 [2024-07-22 18:19:47.460949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.697 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.697 [2024-07-22 18:19:47.472921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.697 [2024-07-22 18:19:47.472959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.697 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.697 [2024-07-22 18:19:47.484892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.697 [2024-07-22 18:19:47.484939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.697 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.697 [2024-07-22 18:19:47.496918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.697 [2024-07-22 18:19:47.496956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:12:35.697 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.697 [2024-07-22 18:19:47.508916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.697 [2024-07-22 18:19:47.508953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.697 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.697 [2024-07-22 18:19:47.520930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.697 [2024-07-22 18:19:47.520966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.697 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.697 [2024-07-22 18:19:47.533015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.697 [2024-07-22 18:19:47.533052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.697 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.697 [2024-07-22 18:19:47.544955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.697 [2024-07-22 18:19:47.544994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.697 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.697 [2024-07-22 18:19:47.556926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.697 [2024-07-22 18:19:47.556964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.697 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.697 [2024-07-22 18:19:47.568942] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.697 [2024-07-22 18:19:47.568981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.697 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:12:35.697 [2024-07-22 18:19:47.574034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.697 [2024-07-22 18:19:47.580978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.697 [2024-07-22 18:19:47.581039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.697 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.698 [2024-07-22 18:19:47.592985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.698 [2024-07-22 18:19:47.593032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.698 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.698 [2024-07-22 18:19:47.605008] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.698 [2024-07-22 18:19:47.605060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.698 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.698 [2024-07-22 18:19:47.616995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.698 [2024-07-22 18:19:47.617050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.698 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.698 [2024-07-22 18:19:47.629033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.698 [2024-07-22 18:19:47.629099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.698 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.698 [2024-07-22 18:19:47.641010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.698 [2024-07-22 18:19:47.641052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.698 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.698 [2024-07-22 18:19:47.652959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.698 [2024-07-22 18:19:47.652996] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.698 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.698 [2024-07-22 18:19:47.665025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.698 [2024-07-22 18:19:47.665074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.698 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.698 [2024-07-22 18:19:47.676995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.698 [2024-07-22 18:19:47.677043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.698 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.698 [2024-07-22 18:19:47.689115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.698 [2024-07-22 18:19:47.689182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.698 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.698 [2024-07-22 18:19:47.701134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.698 [2024-07-22 18:19:47.701253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.698 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.698 [2024-07-22 18:19:47.713045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.698 [2024-07-22 18:19:47.713095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.958 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.958 [2024-07-22 18:19:47.725014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.958 [2024-07-22 18:19:47.725052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.958 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.958 [2024-07-22 18:19:47.737009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.958 [2024-07-22 18:19:47.737046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.958 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.958 [2024-07-22 18:19:47.749007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.958 [2024-07-22 18:19:47.749043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.958 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.958 [2024-07-22 18:19:47.761030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.958 [2024-07-22 18:19:47.761068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.958 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.958 [2024-07-22 18:19:47.773013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.958 [2024-07-22 18:19:47.773051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.958 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.958 [2024-07-22 18:19:47.785057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.958 [2024-07-22 18:19:47.785112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.958 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.958 [2024-07-22 18:19:47.797048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.958 [2024-07-22 18:19:47.797086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.958 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.958 [2024-07-22 18:19:47.809088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.958 [2024-07-22 18:19:47.809130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:12:35.958 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.958 [2024-07-22 18:19:47.821178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.958 [2024-07-22 18:19:47.821264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.958 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.958 [2024-07-22 18:19:47.833202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.958 [2024-07-22 18:19:47.833277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.958 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.958 [2024-07-22 18:19:47.841184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.958 [2024-07-22 18:19:47.841266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.958 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.958 [2024-07-22 18:19:47.849089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.958 [2024-07-22 18:19:47.849134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.959 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.959 [2024-07-22 18:19:47.857070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.959 [2024-07-22 18:19:47.857111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.959 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.959 [2024-07-22 18:19:47.869153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.959 [2024-07-22 18:19:47.869242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.959 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:12:35.959 [2024-07-22 18:19:47.881141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.959 [2024-07-22 18:19:47.881196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.959 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.959 [2024-07-22 18:19:47.893142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.959 [2024-07-22 18:19:47.893199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.959 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.959 [2024-07-22 18:19:47.905210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.959 [2024-07-22 18:19:47.905279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.959 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.959 [2024-07-22 18:19:47.917230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.959 [2024-07-22 18:19:47.917301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.959 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.959 [2024-07-22 18:19:47.929127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.959 [2024-07-22 18:19:47.929169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.959 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.959 [2024-07-22 18:19:47.941165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.959 [2024-07-22 18:19:47.941208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.959 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.959 [2024-07-22 18:19:47.953151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.959 [2024-07-22 18:19:47.953208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.959 2024/07/22 18:19:47 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:35.959 [2024-07-22 18:19:47.965265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:35.959 [2024-07-22 18:19:47.965330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.959 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.219 [2024-07-22 18:19:47.977272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.219 [2024-07-22 18:19:47.977337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.219 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.219 [2024-07-22 18:19:47.989216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.219 [2024-07-22 18:19:47.989278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.219 2024/07/22 18:19:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.219 [2024-07-22 18:19:48.001202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.219 [2024-07-22 18:19:48.001250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.219 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.219 Running I/O for 5 seconds... 
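For reference, each failing call logged above is the nvmf_subsystem_add_ns JSON-RPC method invoked with a namespace ID that is already attached to nqn.2016-06.io.spdk:cnode1, so the target answers with Code=-32602 Msg=Invalid parameters. The snippet below is a minimal illustrative sketch and is not part of the test output: it sends one such request directly to the SPDK JSON-RPC Unix socket. The socket path /var/tmp/spdk.sock is the usual SPDK default and is assumed here; the method name and payload fields are taken verbatim from the params shown in the log.

#!/usr/bin/env python3
# Sketch only: issue one nvmf_subsystem_add_ns request like the ones in the
# log above, over SPDK's JSON-RPC Unix socket.
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"  # assumed default SPDK RPC socket path

# Request mirroring the parameters shown in the log; with NSID 1 already
# attached to cnode1 the target rejects it with error -32602.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1, "no_auto_visible": False},
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect(SOCK_PATH)
    sock.sendall(json.dumps(request).encode())
    # Simplified read: assume the whole JSON reply arrives in a single recv().
    reply = json.loads(sock.recv(65536).decode())
    print(reply.get("error") or reply.get("result"))

When the NSID is already in use, the reply carries the same error object seen in the log entries ({"code": -32602, "message": "Invalid parameters"}); a successful call returns a result object instead of an error.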
00:12:36.219 [2024-07-22 18:19:48.013204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.219 [2024-07-22 18:19:48.013249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.219 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.219 [2024-07-22 18:19:48.032357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.219 [2024-07-22 18:19:48.032448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.219 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.219 [2024-07-22 18:19:48.051084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.219 [2024-07-22 18:19:48.051159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.219 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.219 [2024-07-22 18:19:48.069409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.219 [2024-07-22 18:19:48.069465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.219 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.219 [2024-07-22 18:19:48.087235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.219 [2024-07-22 18:19:48.087285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.219 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.219 [2024-07-22 18:19:48.104798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.219 [2024-07-22 18:19:48.104867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.219 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.219 [2024-07-22 18:19:48.118309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.219 [2024-07-22 18:19:48.118385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.219 2024/07/22 18:19:48 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.219 [2024-07-22 18:19:48.137836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.219 [2024-07-22 18:19:48.137936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.219 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.219 [2024-07-22 18:19:48.152493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.219 [2024-07-22 18:19:48.152545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.219 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.219 [2024-07-22 18:19:48.170323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.219 [2024-07-22 18:19:48.170390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.219 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.219 [2024-07-22 18:19:48.188482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.219 [2024-07-22 18:19:48.188562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.219 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.219 [2024-07-22 18:19:48.204685] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.219 [2024-07-22 18:19:48.204741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.219 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.219 [2024-07-22 18:19:48.222611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.219 [2024-07-22 18:19:48.222665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.219 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.488 [2024-07-22 18:19:48.235931] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.488 [2024-07-22 18:19:48.235982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.489 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.489 [2024-07-22 18:19:48.254319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.489 [2024-07-22 18:19:48.254407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.489 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.489 [2024-07-22 18:19:48.271571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.489 [2024-07-22 18:19:48.271624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.489 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.489 [2024-07-22 18:19:48.288471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.489 [2024-07-22 18:19:48.288521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.489 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.489 [2024-07-22 18:19:48.305765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.489 [2024-07-22 18:19:48.305860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.489 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.489 [2024-07-22 18:19:48.319462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.489 [2024-07-22 18:19:48.319523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.489 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.489 [2024-07-22 18:19:48.338017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.489 [2024-07-22 18:19:48.338079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.489 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.489 [2024-07-22 18:19:48.356081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.489 [2024-07-22 18:19:48.356183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.489 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.489 [2024-07-22 18:19:48.373976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.489 [2024-07-22 18:19:48.374035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.489 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.489 [2024-07-22 18:19:48.387921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.489 [2024-07-22 18:19:48.388007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.489 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.489 [2024-07-22 18:19:48.407747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.489 [2024-07-22 18:19:48.407867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.489 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.489 [2024-07-22 18:19:48.425076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.489 [2024-07-22 18:19:48.425124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.489 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.489 [2024-07-22 18:19:48.442474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.489 [2024-07-22 18:19:48.442542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.489 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.489 [2024-07-22 18:19:48.457298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:12:36.489 [2024-07-22 18:19:48.457377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.489 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.489 [2024-07-22 18:19:48.474389] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.489 [2024-07-22 18:19:48.474453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.489 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.489 [2024-07-22 18:19:48.490661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.489 [2024-07-22 18:19:48.490749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.489 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.760 [2024-07-22 18:19:48.503713] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.760 [2024-07-22 18:19:48.503781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.760 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.760 [2024-07-22 18:19:48.522325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.760 [2024-07-22 18:19:48.522390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.760 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.760 [2024-07-22 18:19:48.539253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.760 [2024-07-22 18:19:48.539331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.760 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.760 [2024-07-22 18:19:48.555375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.760 [2024-07-22 18:19:48.555436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.760 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.760 [2024-07-22 18:19:48.572697] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.760 [2024-07-22 18:19:48.572758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.760 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.760 [2024-07-22 18:19:48.589042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.760 [2024-07-22 18:19:48.589120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.760 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.760 [2024-07-22 18:19:48.602948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.760 [2024-07-22 18:19:48.602991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.760 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.760 [2024-07-22 18:19:48.621312] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.760 [2024-07-22 18:19:48.621388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.760 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.760 [2024-07-22 18:19:48.639066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.760 [2024-07-22 18:19:48.639140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.760 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.760 [2024-07-22 18:19:48.656449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.760 [2024-07-22 18:19:48.656513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.760 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.760 [2024-07-22 18:19:48.672153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:36.760 [2024-07-22 18:19:48.672232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.760 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.760 [2024-07-22 18:19:48.688736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.760 [2024-07-22 18:19:48.688808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.760 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.760 [2024-07-22 18:19:48.705366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.760 [2024-07-22 18:19:48.705430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.760 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.760 [2024-07-22 18:19:48.722484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.760 [2024-07-22 18:19:48.722548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.760 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.760 [2024-07-22 18:19:48.739372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.760 [2024-07-22 18:19:48.739435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.760 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.760 [2024-07-22 18:19:48.757374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.760 [2024-07-22 18:19:48.757437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.760 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.760 [2024-07-22 18:19:48.775071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.760 [2024-07-22 18:19:48.775135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.019 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.019 [2024-07-22 18:19:48.788227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.019 [2024-07-22 18:19:48.788288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.019 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.019 [2024-07-22 18:19:48.805828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.019 [2024-07-22 18:19:48.805900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.019 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.019 [2024-07-22 18:19:48.822851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.019 [2024-07-22 18:19:48.822921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.019 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.019 [2024-07-22 18:19:48.840324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.019 [2024-07-22 18:19:48.840371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.019 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.019 [2024-07-22 18:19:48.857111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.019 [2024-07-22 18:19:48.857183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.019 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.019 [2024-07-22 18:19:48.871011] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.019 [2024-07-22 18:19:48.871062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.019 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.019 [2024-07-22 18:19:48.890091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.019 [2024-07-22 18:19:48.890141] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.019 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.019 [2024-07-22 18:19:48.907555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.019 [2024-07-22 18:19:48.907614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.019 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.019 [2024-07-22 18:19:48.925103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.019 [2024-07-22 18:19:48.925150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.019 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.019 [2024-07-22 18:19:48.942787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.019 [2024-07-22 18:19:48.942849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.019 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.019 [2024-07-22 18:19:48.961107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.020 [2024-07-22 18:19:48.961157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.020 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.020 [2024-07-22 18:19:48.977705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.020 [2024-07-22 18:19:48.977758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.020 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.020 [2024-07-22 18:19:48.989455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.020 [2024-07-22 18:19:48.989504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.020 2024/07/22 18:19:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.020 [2024-07-22 18:19:49.003964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.020 [2024-07-22 18:19:49.004015] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.020 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.020 [2024-07-22 18:19:49.021579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.020 [2024-07-22 18:19:49.021628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.020 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.020 [2024-07-22 18:19:49.035106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.020 [2024-07-22 18:19:49.035153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.278 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.278 [2024-07-22 18:19:49.053940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.278 [2024-07-22 18:19:49.054018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.278 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.278 [2024-07-22 18:19:49.071793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.278 [2024-07-22 18:19:49.071870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.278 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.279 [2024-07-22 18:19:49.089563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.279 [2024-07-22 18:19:49.089615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.279 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.279 [2024-07-22 18:19:49.108237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.279 [2024-07-22 18:19:49.108309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:12:37.279 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.279 [2024-07-22 18:19:49.125818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.279 [2024-07-22 18:19:49.125887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.279 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.279 [2024-07-22 18:19:49.143675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.279 [2024-07-22 18:19:49.143730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.279 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.279 [2024-07-22 18:19:49.159870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.279 [2024-07-22 18:19:49.159943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.279 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.279 [2024-07-22 18:19:49.173307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.279 [2024-07-22 18:19:49.173359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.279 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.279 [2024-07-22 18:19:49.191919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.279 [2024-07-22 18:19:49.191977] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.279 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.279 [2024-07-22 18:19:49.206735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.279 [2024-07-22 18:19:49.206788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.279 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:12:37.279 [2024-07-22 18:19:49.224508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.279 [2024-07-22 18:19:49.224566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.279 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.279 [2024-07-22 18:19:49.241520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.279 [2024-07-22 18:19:49.241580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.279 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.279 [2024-07-22 18:19:49.259241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.279 [2024-07-22 18:19:49.259301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.279 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.279 [2024-07-22 18:19:49.277197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.279 [2024-07-22 18:19:49.277259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.279 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.279 [2024-07-22 18:19:49.290488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.279 [2024-07-22 18:19:49.290542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.279 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.538 [2024-07-22 18:19:49.309009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.538 [2024-07-22 18:19:49.309072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.538 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.538 [2024-07-22 18:19:49.327139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.538 [2024-07-22 18:19:49.327204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.538 2024/07/22 18:19:49 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.538 [2024-07-22 18:19:49.345159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.538 [2024-07-22 18:19:49.345220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.538 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.538 [2024-07-22 18:19:49.361848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.538 [2024-07-22 18:19:49.361905] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.538 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.538 [2024-07-22 18:19:49.375303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.538 [2024-07-22 18:19:49.375355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.538 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.538 [2024-07-22 18:19:49.394372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.538 [2024-07-22 18:19:49.394430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.538 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.538 [2024-07-22 18:19:49.411778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.538 [2024-07-22 18:19:49.411852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.538 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.538 [2024-07-22 18:19:49.428461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.538 [2024-07-22 18:19:49.428517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.538 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.538 [2024-07-22 18:19:49.441618] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.538 [2024-07-22 18:19:49.441668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.538 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.538 [2024-07-22 18:19:49.460473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.538 [2024-07-22 18:19:49.460531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.538 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.538 [2024-07-22 18:19:49.478523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.538 [2024-07-22 18:19:49.478577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.538 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.538 [2024-07-22 18:19:49.496105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.538 [2024-07-22 18:19:49.496164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.538 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.538 [2024-07-22 18:19:49.512492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.538 [2024-07-22 18:19:49.512548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.538 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.538 [2024-07-22 18:19:49.530352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.538 [2024-07-22 18:19:49.530408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.538 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.538 [2024-07-22 18:19:49.543448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.538 [2024-07-22 18:19:49.543499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.538 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.797 [2024-07-22 18:19:49.562610] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.797 [2024-07-22 18:19:49.562671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.797 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.797 [2024-07-22 18:19:49.577608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.797 [2024-07-22 18:19:49.577671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.797 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.797 [2024-07-22 18:19:49.594972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.797 [2024-07-22 18:19:49.595025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.797 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.797 [2024-07-22 18:19:49.609412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.797 [2024-07-22 18:19:49.609473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.797 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.797 [2024-07-22 18:19:49.627402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.797 [2024-07-22 18:19:49.627456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.797 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.797 [2024-07-22 18:19:49.645406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.797 [2024-07-22 18:19:49.645457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.797 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.797 [2024-07-22 18:19:49.662232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:12:37.797 [2024-07-22 18:19:49.662317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.797 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.797 [2024-07-22 18:19:49.680599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.797 [2024-07-22 18:19:49.680670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.797 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.798 [2024-07-22 18:19:49.697043] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.798 [2024-07-22 18:19:49.697096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.798 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.798 [2024-07-22 18:19:49.713613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.798 [2024-07-22 18:19:49.713688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.798 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.798 [2024-07-22 18:19:49.732013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.798 [2024-07-22 18:19:49.732067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.798 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.798 [2024-07-22 18:19:49.748249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.798 [2024-07-22 18:19:49.748323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.798 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.798 [2024-07-22 18:19:49.761916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.798 [2024-07-22 18:19:49.761975] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.798 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.798 [2024-07-22 18:19:49.780019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.798 [2024-07-22 18:19:49.780072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.798 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.798 [2024-07-22 18:19:49.794751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.798 [2024-07-22 18:19:49.794803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.798 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.798 [2024-07-22 18:19:49.812199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.798 [2024-07-22 18:19:49.812253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.056 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.056 [2024-07-22 18:19:49.830235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.056 [2024-07-22 18:19:49.830295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.056 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.056 [2024-07-22 18:19:49.848800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.056 [2024-07-22 18:19:49.848866] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.056 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.056 [2024-07-22 18:19:49.866301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.056 [2024-07-22 18:19:49.866357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.056 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.056 [2024-07-22 18:19:49.882643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:38.056 [2024-07-22 18:19:49.882697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.056 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.056 [2024-07-22 18:19:49.900577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.056 [2024-07-22 18:19:49.900653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.056 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.056 [2024-07-22 18:19:49.917129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.056 [2024-07-22 18:19:49.917184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.056 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.056 [2024-07-22 18:19:49.933289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.056 [2024-07-22 18:19:49.933342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.056 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.056 [2024-07-22 18:19:49.949707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.056 [2024-07-22 18:19:49.949759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.056 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.056 [2024-07-22 18:19:49.966174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.056 [2024-07-22 18:19:49.966231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.056 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.056 [2024-07-22 18:19:49.984168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.056 [2024-07-22 18:19:49.984224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.056 2024/07/22 18:19:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.056 [2024-07-22 18:19:50.001865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.056 [2024-07-22 18:19:50.001917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.056 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.056 [2024-07-22 18:19:50.014849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.056 [2024-07-22 18:19:50.014893] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.056 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.056 [2024-07-22 18:19:50.033446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.056 [2024-07-22 18:19:50.033499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.056 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.056 [2024-07-22 18:19:50.050066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.056 [2024-07-22 18:19:50.050123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.056 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.056 [2024-07-22 18:19:50.063678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.056 [2024-07-22 18:19:50.063740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.056 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.315 [2024-07-22 18:19:50.082155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.315 [2024-07-22 18:19:50.082220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.315 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.315 [2024-07-22 18:19:50.100737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.315 [2024-07-22 18:19:50.100788] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.315 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.315 [2024-07-22 18:19:50.118637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.315 [2024-07-22 18:19:50.118690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.315 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.315 [2024-07-22 18:19:50.136446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.315 [2024-07-22 18:19:50.136508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.315 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.315 [2024-07-22 18:19:50.154187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.315 [2024-07-22 18:19:50.154237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.315 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.315 [2024-07-22 18:19:50.172024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.315 [2024-07-22 18:19:50.172079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.315 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.315 [2024-07-22 18:19:50.186274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.315 [2024-07-22 18:19:50.186327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.315 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.315 [2024-07-22 18:19:50.204580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.315 [2024-07-22 18:19:50.204637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.315 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.315 [2024-07-22 18:19:50.222788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.315 [2024-07-22 18:19:50.222856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.315 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.315 [2024-07-22 18:19:50.240630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.315 [2024-07-22 18:19:50.240683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.315 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.315 [2024-07-22 18:19:50.258656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.315 [2024-07-22 18:19:50.258715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.315 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.315 [2024-07-22 18:19:50.272375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.315 [2024-07-22 18:19:50.272426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.315 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.315 [2024-07-22 18:19:50.291508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.315 [2024-07-22 18:19:50.291568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.315 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.315 [2024-07-22 18:19:50.310145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.315 [2024-07-22 18:19:50.310198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.315 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.315 [2024-07-22 18:19:50.326977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.316 [2024-07-22 18:19:50.327030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:12:38.316 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.574 [2024-07-22 18:19:50.340520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.574 [2024-07-22 18:19:50.340582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.574 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.574 [2024-07-22 18:19:50.361068] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.574 [2024-07-22 18:19:50.361138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.574 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.574 [2024-07-22 18:19:50.380088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.574 [2024-07-22 18:19:50.380162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.574 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.574 [2024-07-22 18:19:50.398300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.574 [2024-07-22 18:19:50.398378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.574 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.574 [2024-07-22 18:19:50.411818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.574 [2024-07-22 18:19:50.411900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.574 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.574 [2024-07-22 18:19:50.431383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.574 [2024-07-22 18:19:50.431448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.574 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:12:38.574 [2024-07-22 18:19:50.449581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.574 [2024-07-22 18:19:50.449642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.574 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.574 [2024-07-22 18:19:50.468473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.574 [2024-07-22 18:19:50.468541] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.574 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.574 [2024-07-22 18:19:50.485121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.574 [2024-07-22 18:19:50.485178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.574 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.574 [2024-07-22 18:19:50.500970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.574 [2024-07-22 18:19:50.501028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.574 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.574 [2024-07-22 18:19:50.519048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.574 [2024-07-22 18:19:50.519134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.574 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.574 [2024-07-22 18:19:50.535940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.574 [2024-07-22 18:19:50.536005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.575 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.575 [2024-07-22 18:19:50.549311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.575 [2024-07-22 18:19:50.549370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.575 2024/07/22 18:19:50 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.575 [2024-07-22 18:19:50.568520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.575 [2024-07-22 18:19:50.568588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.575 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.575 [2024-07-22 18:19:50.585705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.575 [2024-07-22 18:19:50.585768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.575 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.834 [2024-07-22 18:19:50.602771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.834 [2024-07-22 18:19:50.602826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.834 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.834 [2024-07-22 18:19:50.620571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.834 [2024-07-22 18:19:50.620626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.834 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.834 [2024-07-22 18:19:50.637150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.834 [2024-07-22 18:19:50.637213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.834 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.834 [2024-07-22 18:19:50.651013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.834 [2024-07-22 18:19:50.651088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.834 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.834 [2024-07-22 18:19:50.669873] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.834 [2024-07-22 18:19:50.669958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.834 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.834 [2024-07-22 18:19:50.688326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.834 [2024-07-22 18:19:50.688378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.834 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.834 [2024-07-22 18:19:50.705039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.834 [2024-07-22 18:19:50.705097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.834 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.834 [2024-07-22 18:19:50.721366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.834 [2024-07-22 18:19:50.721427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.834 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.834 [2024-07-22 18:19:50.733579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.834 [2024-07-22 18:19:50.733627] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.834 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.834 [2024-07-22 18:19:50.751388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.834 [2024-07-22 18:19:50.751446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.834 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.834 [2024-07-22 18:19:50.768954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.834 [2024-07-22 18:19:50.769011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.834 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.834 [2024-07-22 18:19:50.785786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.834 [2024-07-22 18:19:50.785856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.834 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.834 [2024-07-22 18:19:50.803776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.834 [2024-07-22 18:19:50.803870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.834 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.834 [2024-07-22 18:19:50.817106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.834 [2024-07-22 18:19:50.817169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.834 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:38.834 [2024-07-22 18:19:50.833382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.834 [2024-07-22 18:19:50.833432] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.834 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.164 [2024-07-22 18:19:50.851542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.164 [2024-07-22 18:19:50.851593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.164 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.164 [2024-07-22 18:19:50.868086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.164 [2024-07-22 18:19:50.868138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.164 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.164 [2024-07-22 18:19:50.884701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:12:39.164 [2024-07-22 18:19:50.884784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.164 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.164 [2024-07-22 18:19:50.903075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.164 [2024-07-22 18:19:50.903161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.164 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.164 [2024-07-22 18:19:50.917792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.165 [2024-07-22 18:19:50.917861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.165 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.165 [2024-07-22 18:19:50.936940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.165 [2024-07-22 18:19:50.937002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.165 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.165 [2024-07-22 18:19:50.955156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.165 [2024-07-22 18:19:50.955208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.165 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.165 [2024-07-22 18:19:50.972899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.165 [2024-07-22 18:19:50.972952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.165 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.165 [2024-07-22 18:19:50.990175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.165 [2024-07-22 18:19:50.990232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.165 2024/07/22 18:19:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.165 [2024-07-22 18:19:51.006383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.165 [2024-07-22 18:19:51.006436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.165 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.165 [2024-07-22 18:19:51.023901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.165 [2024-07-22 18:19:51.023986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.165 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.165 [2024-07-22 18:19:51.041715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.165 [2024-07-22 18:19:51.041768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.165 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.165 [2024-07-22 18:19:51.059264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.165 [2024-07-22 18:19:51.059321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.165 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.165 [2024-07-22 18:19:51.077218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.165 [2024-07-22 18:19:51.077295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.165 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.165 [2024-07-22 18:19:51.094144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.165 [2024-07-22 18:19:51.094219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.165 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.165 [2024-07-22 18:19:51.110686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:39.165 [2024-07-22 18:19:51.110741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.165 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.165 [2024-07-22 18:19:51.129444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.165 [2024-07-22 18:19:51.129500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.165 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.165 [2024-07-22 18:19:51.145453] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.165 [2024-07-22 18:19:51.145505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.165 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.423 [2024-07-22 18:19:51.162781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.423 [2024-07-22 18:19:51.162845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.423 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.423 [2024-07-22 18:19:51.180293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.423 [2024-07-22 18:19:51.180346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.423 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.423 [2024-07-22 18:19:51.196791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.423 [2024-07-22 18:19:51.196883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.423 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.423 [2024-07-22 18:19:51.210338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.423 [2024-07-22 18:19:51.210428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.423 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.423 [2024-07-22 18:19:51.229713] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.423 [2024-07-22 18:19:51.229783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.423 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.423 [2024-07-22 18:19:51.246420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.423 [2024-07-22 18:19:51.246496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.423 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.423 [2024-07-22 18:19:51.263261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.423 [2024-07-22 18:19:51.263315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.423 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.423 [2024-07-22 18:19:51.279599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.423 [2024-07-22 18:19:51.279653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.423 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.423 [2024-07-22 18:19:51.292678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.423 [2024-07-22 18:19:51.292729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.423 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.423 [2024-07-22 18:19:51.311170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.423 [2024-07-22 18:19:51.311224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.423 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.423 [2024-07-22 18:19:51.329298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.423 [2024-07-22 18:19:51.329354] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.423 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.423 [2024-07-22 18:19:51.346788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.423 [2024-07-22 18:19:51.346856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.423 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.423 [2024-07-22 18:19:51.363650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.424 [2024-07-22 18:19:51.363704] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.424 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.424 [2024-07-22 18:19:51.376747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.424 [2024-07-22 18:19:51.376797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.424 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.424 [2024-07-22 18:19:51.395757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.424 [2024-07-22 18:19:51.395814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.424 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.424 [2024-07-22 18:19:51.413870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.424 [2024-07-22 18:19:51.413924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.424 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.424 [2024-07-22 18:19:51.431463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.424 [2024-07-22 18:19:51.431516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.424 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.684 [2024-07-22 18:19:51.449018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.684 [2024-07-22 18:19:51.449072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.684 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.684 [2024-07-22 18:19:51.466450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.684 [2024-07-22 18:19:51.466502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.684 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.684 [2024-07-22 18:19:51.484016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.684 [2024-07-22 18:19:51.484075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.684 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.684 [2024-07-22 18:19:51.497590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.684 [2024-07-22 18:19:51.497663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.684 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.684 [2024-07-22 18:19:51.516930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.684 [2024-07-22 18:19:51.517014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.684 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.684 [2024-07-22 18:19:51.534731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.684 [2024-07-22 18:19:51.534789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.684 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.684 [2024-07-22 18:19:51.552613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.684 [2024-07-22 18:19:51.552671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:12:39.684 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.684 [2024-07-22 18:19:51.569274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.684 [2024-07-22 18:19:51.569331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.684 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.684 [2024-07-22 18:19:51.581609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.684 [2024-07-22 18:19:51.581663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.684 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.684 [2024-07-22 18:19:51.598414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.684 [2024-07-22 18:19:51.598476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.684 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.684 [2024-07-22 18:19:51.616389] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.684 [2024-07-22 18:19:51.616451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.685 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.685 [2024-07-22 18:19:51.633386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.685 [2024-07-22 18:19:51.633441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.685 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.685 [2024-07-22 18:19:51.650539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.685 [2024-07-22 18:19:51.650608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.685 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:12:39.685 [2024-07-22 18:19:51.667476] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.685 [2024-07-22 18:19:51.667555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.685 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.685 [2024-07-22 18:19:51.684947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.685 [2024-07-22 18:19:51.685004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.685 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.943 [2024-07-22 18:19:51.702127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.943 [2024-07-22 18:19:51.702194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.943 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.943 [2024-07-22 18:19:51.715413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.943 [2024-07-22 18:19:51.715470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.943 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.943 [2024-07-22 18:19:51.735096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.943 [2024-07-22 18:19:51.735164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.943 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.943 [2024-07-22 18:19:51.752462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.943 [2024-07-22 18:19:51.752562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.943 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.943 [2024-07-22 18:19:51.769447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.943 [2024-07-22 18:19:51.769509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.943 2024/07/22 18:19:51 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.943 [2024-07-22 18:19:51.787077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.943 [2024-07-22 18:19:51.787131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.943 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.943 [2024-07-22 18:19:51.803563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.943 [2024-07-22 18:19:51.803620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.943 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.943 [2024-07-22 18:19:51.821337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.943 [2024-07-22 18:19:51.821391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.943 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.943 [2024-07-22 18:19:51.838152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.943 [2024-07-22 18:19:51.838233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.943 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.943 [2024-07-22 18:19:51.851522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.943 [2024-07-22 18:19:51.851578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.943 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.943 [2024-07-22 18:19:51.872568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.943 [2024-07-22 18:19:51.872633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.944 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.944 [2024-07-22 18:19:51.890147] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.944 [2024-07-22 18:19:51.890213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.944 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.944 [2024-07-22 18:19:51.906867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.944 [2024-07-22 18:19:51.906925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.944 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.944 [2024-07-22 18:19:51.920043] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.944 [2024-07-22 18:19:51.920121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.944 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.944 [2024-07-22 18:19:51.939385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.944 [2024-07-22 18:19:51.939459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.944 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.944 [2024-07-22 18:19:51.958001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.944 [2024-07-22 18:19:51.958060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.203 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.203 [2024-07-22 18:19:51.973690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.203 [2024-07-22 18:19:51.973765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.203 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.203 [2024-07-22 18:19:51.991081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.203 [2024-07-22 18:19:51.991137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.203 2024/07/22 18:19:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.203 [2024-07-22 18:19:52.010058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.203 [2024-07-22 18:19:52.010116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.203 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.203 [2024-07-22 18:19:52.028750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.203 [2024-07-22 18:19:52.028819] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.203 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.203 [2024-07-22 18:19:52.043262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.203 [2024-07-22 18:19:52.043317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.203 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.203 [2024-07-22 18:19:52.062181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.203 [2024-07-22 18:19:52.062243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.203 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.203 [2024-07-22 18:19:52.080737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.203 [2024-07-22 18:19:52.080817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.203 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.203 [2024-07-22 18:19:52.097985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.203 [2024-07-22 18:19:52.098040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.203 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.203 [2024-07-22 18:19:52.116196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:12:40.203 [2024-07-22 18:19:52.116258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.203 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.203 [2024-07-22 18:19:52.130059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.203 [2024-07-22 18:19:52.130111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.203 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.203 [2024-07-22 18:19:52.148945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.203 [2024-07-22 18:19:52.149005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.203 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.203 [2024-07-22 18:19:52.166014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.203 [2024-07-22 18:19:52.166067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.203 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.203 [2024-07-22 18:19:52.182650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.203 [2024-07-22 18:19:52.182708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.203 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.203 [2024-07-22 18:19:52.199590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.203 [2024-07-22 18:19:52.199661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.203 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.203 [2024-07-22 18:19:52.213199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.203 [2024-07-22 18:19:52.213255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.203 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.511 [2024-07-22 18:19:52.232219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.511 [2024-07-22 18:19:52.232283] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.511 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.511 [2024-07-22 18:19:52.247616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.511 [2024-07-22 18:19:52.247671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.511 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.511 [2024-07-22 18:19:52.265846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.511 [2024-07-22 18:19:52.265898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.511 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.511 [2024-07-22 18:19:52.284064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.511 [2024-07-22 18:19:52.284121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.511 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.511 [2024-07-22 18:19:52.300850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.511 [2024-07-22 18:19:52.300911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.511 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.511 [2024-07-22 18:19:52.314260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.511 [2024-07-22 18:19:52.314317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.511 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.511 [2024-07-22 18:19:52.333451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:40.511 [2024-07-22 18:19:52.333508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.511 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.511 [2024-07-22 18:19:52.351502] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.511 [2024-07-22 18:19:52.351557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.511 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.511 [2024-07-22 18:19:52.365651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.511 [2024-07-22 18:19:52.365705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.511 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.511 [2024-07-22 18:19:52.381796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.511 [2024-07-22 18:19:52.381865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.512 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.512 [2024-07-22 18:19:52.399606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.512 [2024-07-22 18:19:52.399660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.512 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.512 [2024-07-22 18:19:52.417672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.512 [2024-07-22 18:19:52.417726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.512 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.512 [2024-07-22 18:19:52.436428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.512 [2024-07-22 18:19:52.436492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.512 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.512 [2024-07-22 18:19:52.454808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.512 [2024-07-22 18:19:52.454904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.512 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.512 [2024-07-22 18:19:52.473043] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.512 [2024-07-22 18:19:52.473098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.512 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.512 [2024-07-22 18:19:52.490444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.512 [2024-07-22 18:19:52.490500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.512 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.512 [2024-07-22 18:19:52.506815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.512 [2024-07-22 18:19:52.506884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.512 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.512 [2024-07-22 18:19:52.524757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.512 [2024-07-22 18:19:52.524812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.771 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.771 [2024-07-22 18:19:52.542447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.771 [2024-07-22 18:19:52.542507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.771 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.771 [2024-07-22 18:19:52.560368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.771 [2024-07-22 18:19:52.560436] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.771 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.771 [2024-07-22 18:19:52.578432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.771 [2024-07-22 18:19:52.578516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.771 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.771 [2024-07-22 18:19:52.592518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.771 [2024-07-22 18:19:52.592575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.771 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.771 [2024-07-22 18:19:52.612274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.771 [2024-07-22 18:19:52.612342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.771 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.772 [2024-07-22 18:19:52.629980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.772 [2024-07-22 18:19:52.630040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.772 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.772 [2024-07-22 18:19:52.646815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.772 [2024-07-22 18:19:52.646887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.772 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.772 [2024-07-22 18:19:52.660094] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.772 [2024-07-22 18:19:52.660153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.772 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.772 [2024-07-22 18:19:52.682886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.772 [2024-07-22 18:19:52.683004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.772 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.772 [2024-07-22 18:19:52.700727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.772 [2024-07-22 18:19:52.700799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.772 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.772 [2024-07-22 18:19:52.721161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.772 [2024-07-22 18:19:52.721244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.772 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.772 [2024-07-22 18:19:52.742535] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.772 [2024-07-22 18:19:52.742640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.772 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.772 [2024-07-22 18:19:52.761095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.772 [2024-07-22 18:19:52.761156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.772 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.772 [2024-07-22 18:19:52.779207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.772 [2024-07-22 18:19:52.779299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.772 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.031 [2024-07-22 18:19:52.794170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.031 [2024-07-22 18:19:52.794247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:12:41.031 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.031 [2024-07-22 18:19:52.814458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.031 [2024-07-22 18:19:52.814543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.031 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.031 [2024-07-22 18:19:52.831312] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.031 [2024-07-22 18:19:52.831389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.031 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.031 [2024-07-22 18:19:52.848510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.031 [2024-07-22 18:19:52.848569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.031 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.031 [2024-07-22 18:19:52.867437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.031 [2024-07-22 18:19:52.867499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.031 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.032 [2024-07-22 18:19:52.884260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.032 [2024-07-22 18:19:52.884315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.032 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.032 [2024-07-22 18:19:52.901022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.032 [2024-07-22 18:19:52.901078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.032 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:12:41.032 [2024-07-22 18:19:52.919894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.032 [2024-07-22 18:19:52.919989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.032 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.032 [2024-07-22 18:19:52.937374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.032 [2024-07-22 18:19:52.937438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.032 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.032 [2024-07-22 18:19:52.951174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.032 [2024-07-22 18:19:52.951227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.032 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.032 [2024-07-22 18:19:52.969589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.032 [2024-07-22 18:19:52.969651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.032 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.032 [2024-07-22 18:19:52.987022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.032 [2024-07-22 18:19:52.987079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.032 2024/07/22 18:19:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.032 [2024-07-22 18:19:53.005048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.032 [2024-07-22 18:19:53.005135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.032 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.032 [2024-07-22 18:19:53.022443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.032 [2024-07-22 18:19:53.022517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.032 2024/07/22 18:19:53 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:12:41.032
00:12:41.032 Latency(us)
00:12:41.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:41.032 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:41.032 Nvme1n1 : 5.02 8440.00 65.94 0.00 0.00 15136.27 4527.94 32410.53
00:12:41.032 ===================================================================================================================
00:12:41.032 Total : 8440.00 65.94 0.00 0.00 15136.27 4527.94 32410.53
00:12:41.032 [2024-07-22 18:19:53.036701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.032 [2024-07-22 18:19:53.036758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.032 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.032 [2024-07-22 18:19:53.044662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.032 [2024-07-22 18:19:53.044710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.291 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.291 [2024-07-22 18:19:53.056651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.291 [2024-07-22 18:19:53.056718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.291 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.291 [2024-07-22 18:19:53.068686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.291 [2024-07-22 18:19:53.068732] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.291 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.291 [2024-07-22 18:19:53.080729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.291 [2024-07-22 18:19:53.080789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.291 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.291 [2024-07-22 18:19:53.092789]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.291 [2024-07-22 18:19:53.092866] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.291 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.291 [2024-07-22 18:19:53.104681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.291 [2024-07-22 18:19:53.104725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.291 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.291 [2024-07-22 18:19:53.116678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.291 [2024-07-22 18:19:53.116719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.291 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.291 [2024-07-22 18:19:53.128701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.291 [2024-07-22 18:19:53.128744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.291 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.291 [2024-07-22 18:19:53.140710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.291 [2024-07-22 18:19:53.140754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.291 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.291 [2024-07-22 18:19:53.152668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.291 [2024-07-22 18:19:53.152710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.291 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.291 [2024-07-22 18:19:53.164781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.291 [2024-07-22 18:19:53.164854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.291 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.291 [2024-07-22 18:19:53.176769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.291 [2024-07-22 18:19:53.176830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.291 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.291 [2024-07-22 18:19:53.188759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.291 [2024-07-22 18:19:53.188809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.291 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.291 [2024-07-22 18:19:53.200706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.291 [2024-07-22 18:19:53.200748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.291 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.291 [2024-07-22 18:19:53.212703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.291 [2024-07-22 18:19:53.212744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.291 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.292 [2024-07-22 18:19:53.224725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.292 [2024-07-22 18:19:53.224770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.292 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.292 [2024-07-22 18:19:53.236772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.292 [2024-07-22 18:19:53.236826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.292 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.292 [2024-07-22 18:19:53.248786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:12:41.292 [2024-07-22 18:19:53.248859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.292 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.292 [2024-07-22 18:19:53.260746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.292 [2024-07-22 18:19:53.260785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.292 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.292 [2024-07-22 18:19:53.272807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.292 [2024-07-22 18:19:53.272880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.292 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.292 [2024-07-22 18:19:53.284793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.292 [2024-07-22 18:19:53.284855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.292 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.292 [2024-07-22 18:19:53.296743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.292 [2024-07-22 18:19:53.296784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.292 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.550 [2024-07-22 18:19:53.308743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.550 [2024-07-22 18:19:53.308787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.550 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.550 [2024-07-22 18:19:53.320824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.550 [2024-07-22 18:19:53.320896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.550 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.550 [2024-07-22 18:19:53.332809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.550 [2024-07-22 18:19:53.332876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.550 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.550 [2024-07-22 18:19:53.344741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.550 [2024-07-22 18:19:53.344783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.550 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.550 [2024-07-22 18:19:53.356788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.550 [2024-07-22 18:19:53.356830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.550 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.550 [2024-07-22 18:19:53.368748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.550 [2024-07-22 18:19:53.368790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.550 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.550 [2024-07-22 18:19:53.380852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.550 [2024-07-22 18:19:53.380907] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.550 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.550 [2024-07-22 18:19:53.392890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.550 [2024-07-22 18:19:53.392953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.550 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.550 [2024-07-22 18:19:53.404867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:41.550 [2024-07-22 18:19:53.404929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.550 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.550 [2024-07-22 18:19:53.416943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.550 [2024-07-22 18:19:53.417010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.550 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.550 [2024-07-22 18:19:53.428929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.550 [2024-07-22 18:19:53.428994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.550 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.550 [2024-07-22 18:19:53.440897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.550 [2024-07-22 18:19:53.440958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.550 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.550 [2024-07-22 18:19:53.452874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.550 [2024-07-22 18:19:53.452918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.550 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.550 [2024-07-22 18:19:53.464781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.550 [2024-07-22 18:19:53.464822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.550 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.550 [2024-07-22 18:19:53.476814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.550 [2024-07-22 18:19:53.476871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.550 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.550 [2024-07-22 18:19:53.488808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.550 [2024-07-22 18:19:53.488864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.550 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.550 [2024-07-22 18:19:53.500788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.550 [2024-07-22 18:19:53.500843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.550 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.550 [2024-07-22 18:19:53.512896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.550 [2024-07-22 18:19:53.512949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.550 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.550 [2024-07-22 18:19:53.524918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.550 [2024-07-22 18:19:53.524988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.550 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.550 [2024-07-22 18:19:53.536893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.550 [2024-07-22 18:19:53.536953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.550 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.550 [2024-07-22 18:19:53.548910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.550 [2024-07-22 18:19:53.548965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.550 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.550 [2024-07-22 18:19:53.560852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.550 [2024-07-22 18:19:53.560895] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.550 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.809 [2024-07-22 18:19:53.572858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.809 [2024-07-22 18:19:53.572899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.809 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.809 [2024-07-22 18:19:53.584859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.809 [2024-07-22 18:19:53.584902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.809 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.809 [2024-07-22 18:19:53.596874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.809 [2024-07-22 18:19:53.596916] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.809 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.809 [2024-07-22 18:19:53.608876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.809 [2024-07-22 18:19:53.608926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.809 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.809 [2024-07-22 18:19:53.620879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.809 [2024-07-22 18:19:53.620919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.809 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.809 [2024-07-22 18:19:53.632858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.809 [2024-07-22 18:19:53.632897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.809 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.809 [2024-07-22 18:19:53.648893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.809 [2024-07-22 18:19:53.648933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.809 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.809 [2024-07-22 18:19:53.660898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.809 [2024-07-22 18:19:53.660935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.809 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.809 [2024-07-22 18:19:53.672891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.809 [2024-07-22 18:19:53.672926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.809 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.809 [2024-07-22 18:19:53.684919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.809 [2024-07-22 18:19:53.684961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.809 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.809 [2024-07-22 18:19:53.696884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.809 [2024-07-22 18:19:53.696922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.809 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.809 [2024-07-22 18:19:53.708931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.809 [2024-07-22 18:19:53.708969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.809 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.809 [2024-07-22 18:19:53.720921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.809 [2024-07-22 18:19:53.720958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:12:41.809 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.809 [2024-07-22 18:19:53.732933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.809 [2024-07-22 18:19:53.732973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.809 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.809 [2024-07-22 18:19:53.744973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.809 [2024-07-22 18:19:53.745019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.809 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.809 [2024-07-22 18:19:53.756950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.809 [2024-07-22 18:19:53.757000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.810 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.810 [2024-07-22 18:19:53.768946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.810 [2024-07-22 18:19:53.768982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.810 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.810 [2024-07-22 18:19:53.780940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.810 [2024-07-22 18:19:53.780976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.810 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.810 [2024-07-22 18:19:53.793014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.810 [2024-07-22 18:19:53.793072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.810 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:12:41.810 [2024-07-22 18:19:53.805019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.810 [2024-07-22 18:19:53.805077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.810 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:41.810 [2024-07-22 18:19:53.816967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.810 [2024-07-22 18:19:53.817010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.810 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.069 [2024-07-22 18:19:53.828970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.069 [2024-07-22 18:19:53.829008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.069 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.069 [2024-07-22 18:19:53.840971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.069 [2024-07-22 18:19:53.841008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.069 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.069 [2024-07-22 18:19:53.852944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.069 [2024-07-22 18:19:53.852982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.069 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.069 [2024-07-22 18:19:53.864979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.069 [2024-07-22 18:19:53.865017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.069 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.069 [2024-07-22 18:19:53.876994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.069 [2024-07-22 18:19:53.877035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.069 2024/07/22 18:19:53 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.069 [2024-07-22 18:19:53.889006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.069 [2024-07-22 18:19:53.889046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.069 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.069 [2024-07-22 18:19:53.900992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.069 [2024-07-22 18:19:53.901029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.069 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.069 [2024-07-22 18:19:53.912991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.069 [2024-07-22 18:19:53.913033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.069 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.069 [2024-07-22 18:19:53.925024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.069 [2024-07-22 18:19:53.925070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.069 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.069 [2024-07-22 18:19:53.937075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.069 [2024-07-22 18:19:53.937127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.069 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.069 [2024-07-22 18:19:53.948991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.069 [2024-07-22 18:19:53.949032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.069 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.069 [2024-07-22 18:19:53.961041] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.069 [2024-07-22 18:19:53.961090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.069 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.069 [2024-07-22 18:19:53.973154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.069 [2024-07-22 18:19:53.973214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.069 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.069 [2024-07-22 18:19:53.985048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.069 [2024-07-22 18:19:53.985093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.069 2024/07/22 18:19:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.069 [2024-07-22 18:19:53.997058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.069 [2024-07-22 18:19:53.997101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.069 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.069 [2024-07-22 18:19:54.009036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.069 [2024-07-22 18:19:54.009080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.069 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.069 [2024-07-22 18:19:54.021094] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.069 [2024-07-22 18:19:54.021150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.069 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.069 [2024-07-22 18:19:54.033099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.069 [2024-07-22 18:19:54.033142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.069 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.069 [2024-07-22 18:19:54.045022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.069 [2024-07-22 18:19:54.045057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.070 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.070 [2024-07-22 18:19:54.057050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.070 [2024-07-22 18:19:54.057088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.070 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.070 [2024-07-22 18:19:54.069103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.070 [2024-07-22 18:19:54.069139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.070 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.070 [2024-07-22 18:19:54.081042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.070 [2024-07-22 18:19:54.081078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.070 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.329 [2024-07-22 18:19:54.093071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.329 [2024-07-22 18:19:54.093108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.329 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.329 [2024-07-22 18:19:54.105089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.329 [2024-07-22 18:19:54.105127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.329 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.329 [2024-07-22 18:19:54.117073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:12:42.329 [2024-07-22 18:19:54.117113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.329 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.329 [2024-07-22 18:19:54.129095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.329 [2024-07-22 18:19:54.129133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.329 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.329 [2024-07-22 18:19:54.141211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.329 [2024-07-22 18:19:54.141263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.329 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.329 [2024-07-22 18:19:54.153112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.329 [2024-07-22 18:19:54.153149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.329 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.329 [2024-07-22 18:19:54.165137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.329 [2024-07-22 18:19:54.165174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.329 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.329 [2024-07-22 18:19:54.177143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.329 [2024-07-22 18:19:54.177178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.329 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.329 [2024-07-22 18:19:54.189206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.329 [2024-07-22 18:19:54.189261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.329 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.329 [2024-07-22 18:19:54.201155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.329 [2024-07-22 18:19:54.201200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.329 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.329 [2024-07-22 18:19:54.213135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.329 [2024-07-22 18:19:54.213174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.329 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.329 [2024-07-22 18:19:54.221120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.329 [2024-07-22 18:19:54.221157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.329 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.329 [2024-07-22 18:19:54.233116] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.329 [2024-07-22 18:19:54.233155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.329 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.329 [2024-07-22 18:19:54.245168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.329 [2024-07-22 18:19:54.245209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.329 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.329 [2024-07-22 18:19:54.257139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.329 [2024-07-22 18:19:54.257177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.329 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.329 [2024-07-22 18:19:54.269146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:42.329 [2024-07-22 18:19:54.269187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.329 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.329 [2024-07-22 18:19:54.281163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.329 [2024-07-22 18:19:54.281205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.329 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.329 [2024-07-22 18:19:54.293168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.329 [2024-07-22 18:19:54.293210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.329 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.329 [2024-07-22 18:19:54.305150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.329 [2024-07-22 18:19:54.305188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.329 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.329 [2024-07-22 18:19:54.317233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.329 [2024-07-22 18:19:54.317279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.329 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.329 [2024-07-22 18:19:54.329156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.329 [2024-07-22 18:19:54.329187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.329 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.329 [2024-07-22 18:19:54.337159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.329 [2024-07-22 18:19:54.337187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.329 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.588 [2024-07-22 18:19:54.349187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.588 [2024-07-22 18:19:54.349227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.588 2024/07/22 18:19:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:42.588 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (74009) - No such process 00:12:42.588 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 74009 00:12:42.588 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.588 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.588 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.588 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.588 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:42.588 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.588 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.588 delay0 00:12:42.588 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.588 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:42.588 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.588 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.588 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.588 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:42.846 [2024-07-22 18:19:54.625759] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:49.443 Initializing NVMe Controllers 00:12:49.443 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:49.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:49.443 Initialization complete. Launching workers. 
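For the abort phase that starts here, the test swaps the plain malloc namespace for a delay bdev so that submitted commands stay queued long enough to be aborted; the statistics that follow count how many of those in-flight commands the abort example managed to cancel. A condensed sketch of that sequence (all values are copied from the rpc_cmd and abort invocations traced above; rpc_cmd in the trace is a thin wrapper around the rpc.py script referenced elsewhere in this log):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1          # drop the malloc0 namespace
  $rpc bdev_delay_create -b malloc0 -d delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000                    # artificial read/write latency (values from the log, in microseconds)
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1   # expose the slow bdev as NSID 1
  /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
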
00:12:49.443 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 66 00:12:49.443 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 353, failed to submit 33 00:12:49.443 success 158, unsuccess 195, failed 0 00:12:49.443 18:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:49.443 18:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:49.443 18:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:49.443 18:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:12:49.443 18:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:49.443 18:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:12:49.443 18:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:49.443 18:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:49.443 rmmod nvme_tcp 00:12:49.443 rmmod nvme_fabrics 00:12:49.443 rmmod nvme_keyring 00:12:49.443 18:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:49.443 18:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:12:49.443 18:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:12:49.443 18:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 73822 ']' 00:12:49.443 18:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 73822 00:12:49.443 18:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 73822 ']' 00:12:49.443 18:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 73822 00:12:49.443 18:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:12:49.443 18:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:49.443 18:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73822 00:12:49.443 18:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:49.443 18:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:49.443 killing process with pid 73822 00:12:49.443 18:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73822' 00:12:49.443 18:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 73822 00:12:49.443 18:20:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 73822 00:12:50.377 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:50.377 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:50.377 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:50.377 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:50.377 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:50.377 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:12:50.377 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.377 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.377 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:50.637 00:12:50.637 real 0m29.465s 00:12:50.637 user 0m48.231s 00:12:50.637 sys 0m7.061s 00:12:50.637 ************************************ 00:12:50.637 END TEST nvmf_zcopy 00:12:50.637 ************************************ 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:50.637 ************************************ 00:12:50.637 START TEST nvmf_nmic 00:12:50.637 ************************************ 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:50.637 * Looking for test storage... 00:12:50.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:50.637 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:50.638 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 
-- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:50.638 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:50.638 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:50.638 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:50.638 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:50.638 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:50.638 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:50.638 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:50.638 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:50.638 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:50.638 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:50.638 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:50.638 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:50.638 Cannot find device "nvmf_tgt_br" 00:12:50.638 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:12:50.638 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:50.638 Cannot find device "nvmf_tgt_br2" 00:12:50.638 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:12:50.638 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:50.638 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:50.638 Cannot find device "nvmf_tgt_br" 00:12:50.638 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:12:50.638 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:50.638 Cannot find device "nvmf_tgt_br2" 00:12:50.638 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:12:50.638 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:50.897 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:50.897 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:50.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:50.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:12:50.897 00:12:50.897 --- 10.0.0.2 ping statistics --- 00:12:50.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.897 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:50.897 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:12:50.897 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:12:50.897 00:12:50.897 --- 10.0.0.3 ping statistics --- 00:12:50.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.897 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:50.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:50.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:12:50.897 00:12:50.897 --- 10.0.0.1 ping statistics --- 00:12:50.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.897 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:50.897 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:51.156 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:51.156 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:51.156 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:51.156 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:51.156 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=74365 00:12:51.156 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 74365 00:12:51.156 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 74365 ']' 00:12:51.156 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:51.156 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.156 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:51.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.156 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.156 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:51.156 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:51.156 [2024-07-22 18:20:03.048001] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
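The three pings above verify the virtual topology that nvmf_veth_init just built and that the nvmf_tgt now starting (inside nvmf_tgt_ns_spdk, pid 74365) will listen on: the target ends of two veth pairs sit in a network namespace, the initiator end stays on the host, and a bridge joins the peer interfaces. A condensed recap of those ip commands (interface names, addresses and the namespace name are taken from the trace; the second target interface nvmf_tgt_if2/10.0.0.3 and the individual link-up steps are handled the same way and omitted here for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br          # initiator side, stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br            # target side, moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                            # bridge the host-side peers together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # accept NVMe/TCP (port 4420) on the initiator interface
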
00:12:51.156 [2024-07-22 18:20:03.049200] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.423 [2024-07-22 18:20:03.236587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.688 [2024-07-22 18:20:03.559012] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.688 [2024-07-22 18:20:03.559300] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.688 [2024-07-22 18:20:03.559329] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.688 [2024-07-22 18:20:03.559346] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.688 [2024-07-22 18:20:03.559359] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:51.688 [2024-07-22 18:20:03.559650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.688 [2024-07-22 18:20:03.560253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.688 [2024-07-22 18:20:03.560335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.688 [2024-07-22 18:20:03.560347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.254 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:52.254 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:12:52.254 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:52.254 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:52.254 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:52.254 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.254 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:52.254 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.254 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:52.254 [2024-07-22 18:20:04.090887] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:52.254 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.254 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:52.254 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.254 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:52.254 Malloc0 00:12:52.254 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.254 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:52.254 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.254 18:20:04 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:52.254 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.254 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:52.254 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:52.255 [2024-07-22 18:20:04.201643] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.255 test case1: single bdev can't be used in multiple subsystems 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:52.255 [2024-07-22 18:20:04.229415] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:52.255 [2024-07-22 18:20:04.229645] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:52.255 [2024-07-22 18:20:04.229687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.255 2024/07/22 18:20:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:52.255 request: 00:12:52.255 { 00:12:52.255 "method": "nvmf_subsystem_add_ns", 00:12:52.255 "params": { 00:12:52.255 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:52.255 "namespace": { 00:12:52.255 "bdev_name": "Malloc0", 00:12:52.255 "no_auto_visible": false 00:12:52.255 } 00:12:52.255 } 00:12:52.255 } 00:12:52.255 Got JSON-RPC error response 00:12:52.255 GoRPCClient: error on JSON-RPC call 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:52.255 Adding namespace failed - expected result. 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:52.255 test case2: host connect to nvmf target in multiple paths 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:52.255 [2024-07-22 18:20:04.241655] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.255 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.513 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:12:52.772 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:52.772 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:12:52.772 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:52.772 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:52.772 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:12:54.673 18:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:54.673 18:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:54.673 18:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:54.673 18:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:54.673 18:20:06 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:54.673 18:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:12:54.673 18:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:54.673 [global] 00:12:54.673 thread=1 00:12:54.673 invalidate=1 00:12:54.673 rw=write 00:12:54.673 time_based=1 00:12:54.673 runtime=1 00:12:54.673 ioengine=libaio 00:12:54.673 direct=1 00:12:54.673 bs=4096 00:12:54.673 iodepth=1 00:12:54.673 norandommap=0 00:12:54.673 numjobs=1 00:12:54.673 00:12:54.673 verify_dump=1 00:12:54.673 verify_backlog=512 00:12:54.673 verify_state_save=0 00:12:54.673 do_verify=1 00:12:54.673 verify=crc32c-intel 00:12:54.673 [job0] 00:12:54.673 filename=/dev/nvme0n1 00:12:54.673 Could not set queue depth (nvme0n1) 00:12:54.932 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:54.932 fio-3.35 00:12:54.932 Starting 1 thread 00:12:55.885 00:12:55.885 job0: (groupid=0, jobs=1): err= 0: pid=74480: Mon Jul 22 18:20:07 2024 00:12:55.885 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:12:55.885 slat (nsec): min=15827, max=62896, avg=19806.75, stdev=4829.41 00:12:55.885 clat (usec): min=190, max=475, avg=228.14, stdev=26.21 00:12:55.885 lat (usec): min=210, max=494, avg=247.95, stdev=26.63 00:12:55.885 clat percentiles (usec): 00:12:55.885 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 210], 00:12:55.885 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:12:55.885 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 258], 95.00th=[ 285], 00:12:55.885 | 99.00th=[ 330], 99.50th=[ 343], 99.90th=[ 396], 99.95th=[ 400], 00:12:55.885 | 99.99th=[ 478] 00:12:55.885 write: IOPS=2426, BW=9706KiB/s (9939kB/s)(9716KiB/1001msec); 0 zone resets 00:12:55.885 slat (usec): min=22, max=178, avg=30.39, stdev= 9.20 00:12:55.885 clat (usec): min=126, max=738, avg=168.21, stdev=27.03 00:12:55.885 lat (usec): min=162, max=761, avg=198.60, stdev=29.64 00:12:55.885 clat percentiles (usec): 00:12:55.885 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:12:55.885 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:12:55.885 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 188], 95.00th=[ 200], 00:12:55.885 | 99.00th=[ 273], 99.50th=[ 289], 99.90th=[ 396], 99.95th=[ 603], 00:12:55.885 | 99.99th=[ 742] 00:12:55.885 bw ( KiB/s): min= 9568, max= 9568, per=98.58%, avg=9568.00, stdev= 0.00, samples=1 00:12:55.885 iops : min= 2392, max= 2392, avg=2392.00, stdev= 0.00, samples=1 00:12:55.885 lat (usec) : 250=93.48%, 500=6.48%, 750=0.04% 00:12:55.885 cpu : usr=2.00%, sys=8.60%, ctx=4477, majf=0, minf=2 00:12:55.885 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:55.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:55.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:55.885 issued rwts: total=2048,2429,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:55.885 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:55.885 00:12:55.885 Run status group 0 (all jobs): 00:12:55.885 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:12:55.885 WRITE: bw=9706KiB/s (9939kB/s), 9706KiB/s-9706KiB/s (9939kB/s-9939kB/s), io=9716KiB (9949kB), 
run=1001-1001msec 00:12:55.885 00:12:55.885 Disk stats (read/write): 00:12:55.885 nvme0n1: ios=1980/2048, merge=0/0, ticks=480/378, in_queue=858, util=91.78% 00:12:55.885 18:20:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:56.144 18:20:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:56.144 18:20:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:12:56.144 18:20:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:56.144 18:20:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.144 18:20:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:56.144 18:20:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.144 18:20:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:12:56.144 18:20:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:56.144 18:20:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:56.144 18:20:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:56.144 18:20:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:12:56.144 18:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:56.144 18:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:12:56.144 18:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:56.144 18:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:56.144 rmmod nvme_tcp 00:12:56.144 rmmod nvme_fabrics 00:12:56.144 rmmod nvme_keyring 00:12:56.144 18:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:56.144 18:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:12:56.144 18:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:12:56.144 18:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 74365 ']' 00:12:56.144 18:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 74365 00:12:56.144 18:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 74365 ']' 00:12:56.144 18:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 74365 00:12:56.144 18:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:12:56.144 18:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:56.144 18:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74365 00:12:56.144 18:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:56.144 killing process with pid 74365 00:12:56.144 18:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:56.144 18:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 
'killing process with pid 74365' 00:12:56.144 18:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 74365 00:12:56.144 18:20:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 74365 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:58.048 00:12:58.048 real 0m7.120s 00:12:58.048 user 0m22.428s 00:12:58.048 sys 0m1.422s 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:58.048 ************************************ 00:12:58.048 END TEST nvmf_nmic 00:12:58.048 ************************************ 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:58.048 ************************************ 00:12:58.048 START TEST nvmf_fio_target 00:12:58.048 ************************************ 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:58.048 * Looking for test storage... 
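Before the nvmf_fio_target run gets going, note that the short verification pass fio-wrapper ran in the nmic test above maps to a small standalone job file. A rough sketch of the equivalent manual invocation (the parameters and the /dev/nvme0n1 filename are the ones printed in the job header above; the /tmp path and heredoc form are illustrative only):

  cat > /tmp/nmic-job0.fio <<'EOF'   # hypothetical path, used only for this sketch
  [job0]
  ioengine=libaio
  direct=1
  thread=1
  filename=/dev/nvme0n1
  rw=write
  bs=4096
  iodepth=1
  numjobs=1
  time_based=1
  runtime=1
  do_verify=1
  verify=crc32c-intel
  verify_backlog=512
  EOF
  fio /tmp/nmic-job0.fio
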
00:12:58.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:58.048 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:58.049 
18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:58.049 Cannot find device "nvmf_tgt_br" 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:58.049 Cannot find device "nvmf_tgt_br2" 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:58.049 Cannot find device "nvmf_tgt_br" 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:58.049 Cannot find device "nvmf_tgt_br2" 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:58.049 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:58.049 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:58.049 
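The commands above are nvmf_veth_init first tearing down leftovers from a previous run (the "Cannot find device" / "Cannot open network namespace" messages just mean there was nothing to remove) and then building the test topology: a dedicated network namespace for the target and three veth pairs, with the target-side ends moved into that namespace. A condensed sketch of what the trace shows (names and addresses copied from the log; the bridge, in-namespace link-up and loopback follow on the next lines, and the script's fallback/error handling is omitted):

    # Fresh namespace for the SPDK target
    ip netns add nvmf_tgt_ns_spdk

    # Three veth pairs: one initiator-facing, two target-facing
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Move the target ends into the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: 10.0.0.1 = initiator side, 10.0.0.2 / 10.0.0.3 = target listeners
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring the host-side links up (namespace links and nvmf_br come next in the log)
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br  up
    ip link set nvmf_tgt_br2 up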
18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:58.049 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:58.049 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:58.049 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:58.049 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:58.049 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:58.049 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:58.049 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:58.308 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:58.308 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:58.308 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:58.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:58.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:12:58.308 00:12:58.308 --- 10.0.0.2 ping statistics --- 00:12:58.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.308 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:12:58.308 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:58.308 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:58.308 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:12:58.308 00:12:58.308 --- 10.0.0.3 ping statistics --- 00:12:58.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.308 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:12:58.308 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:58.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:58.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:12:58.308 00:12:58.308 --- 10.0.0.1 ping statistics --- 00:12:58.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.308 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:12:58.308 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:58.308 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:12:58.308 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:58.308 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:58.308 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:58.308 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:58.308 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:58.308 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:58.308 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:58.308 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:58.308 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:58.308 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:58.308 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.308 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=74674 00:12:58.308 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:58.308 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 74674 00:12:58.308 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 74674 ']' 00:12:58.308 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.309 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:58.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.309 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.309 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:58.309 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.309 [2024-07-22 18:20:10.251779] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
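The rest of the bring-up visible around this point: the bridge-facing veth peers are enslaved to nvmf_br, TCP port 4420 is opened on the initiator interface, connectivity to 10.0.0.2/10.0.0.3/10.0.0.1 is verified with ping, and nvmf_tgt is started inside the namespace (waitforlisten polls /var/tmp/spdk.sock) before being configured over rpc.py in the lines that follow. A condensed, illustrative sketch of that sequence with values copied from the trace (loops and backgrounding are my condensation; waits, cleanup and error handling are omitted):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Bridge the initiator and target veth peers, allow NVMe/TCP traffic
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Start the target inside the namespace, then wait for its RPC socket
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # Configure it: TCP transport, 64 MB / 512 B malloc bdevs, raid0 + concat arrays
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    for i in 0 1 2 3 4 5 6; do $rpc_py bdev_malloc_create 64 512; done   # -> Malloc0..Malloc6
    $rpc_py bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
    $rpc_py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

    # One subsystem with four namespaces, listening on 10.0.0.2:4420
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    done
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host-side connect; the four namespaces appear as /dev/nvme0n1..n4 for the fio jobs below
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da \
                 --hostid=0b8484e2-e129-4a11-8748-0b3c728771da \
                 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420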
00:12:58.309 [2024-07-22 18:20:10.252012] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.567 [2024-07-22 18:20:10.431460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:58.826 [2024-07-22 18:20:10.707215] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.826 [2024-07-22 18:20:10.707304] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.826 [2024-07-22 18:20:10.707323] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.826 [2024-07-22 18:20:10.707339] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.826 [2024-07-22 18:20:10.707351] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:58.826 [2024-07-22 18:20:10.707591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.826 [2024-07-22 18:20:10.708316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.826 [2024-07-22 18:20:10.708501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.826 [2024-07-22 18:20:10.708515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.394 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:59.394 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:12:59.394 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:59.394 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:59.394 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.394 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.394 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:59.652 [2024-07-22 18:20:11.492639] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.652 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:59.909 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:59.909 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:00.473 18:20:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:00.473 18:20:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:00.732 18:20:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:00.732 18:20:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:00.990 18:20:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:00.990 18:20:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:01.249 18:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:01.815 18:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:01.815 18:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:02.073 18:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:02.073 18:20:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:02.331 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:02.331 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:02.589 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:03.154 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:03.154 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:03.154 18:20:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:03.154 18:20:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:03.719 18:20:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.719 [2024-07-22 18:20:15.705708] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.719 18:20:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:03.977 18:20:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:04.234 18:20:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.492 18:20:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:04.492 18:20:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:13:04.492 18:20:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:13:04.492 18:20:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:13:04.492 18:20:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:13:04.492 18:20:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:13:06.393 18:20:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:06.393 18:20:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:06.393 18:20:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.393 18:20:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:13:06.393 18:20:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.393 18:20:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:13:06.393 18:20:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:06.652 [global] 00:13:06.652 thread=1 00:13:06.652 invalidate=1 00:13:06.652 rw=write 00:13:06.653 time_based=1 00:13:06.653 runtime=1 00:13:06.653 ioengine=libaio 00:13:06.653 direct=1 00:13:06.653 bs=4096 00:13:06.653 iodepth=1 00:13:06.653 norandommap=0 00:13:06.653 numjobs=1 00:13:06.653 00:13:06.653 verify_dump=1 00:13:06.653 verify_backlog=512 00:13:06.653 verify_state_save=0 00:13:06.653 do_verify=1 00:13:06.653 verify=crc32c-intel 00:13:06.653 [job0] 00:13:06.653 filename=/dev/nvme0n1 00:13:06.653 [job1] 00:13:06.653 filename=/dev/nvme0n2 00:13:06.653 [job2] 00:13:06.653 filename=/dev/nvme0n3 00:13:06.653 [job3] 00:13:06.653 filename=/dev/nvme0n4 00:13:06.653 Could not set queue depth (nvme0n1) 00:13:06.653 Could not set queue depth (nvme0n2) 00:13:06.653 Could not set queue depth (nvme0n3) 00:13:06.653 Could not set queue depth (nvme0n4) 00:13:06.653 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:06.653 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:06.653 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:06.653 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:06.653 fio-3.35 00:13:06.653 Starting 4 threads 00:13:08.030 00:13:08.030 job0: (groupid=0, jobs=1): err= 0: pid=74973: Mon Jul 22 18:20:19 2024 00:13:08.030 read: IOPS=1343, BW=5375KiB/s (5504kB/s)(5380KiB/1001msec) 00:13:08.030 slat (nsec): min=12761, max=55337, avg=18539.63, stdev=5285.35 00:13:08.030 clat (usec): min=194, max=7527, avg=371.73, stdev=268.23 00:13:08.030 lat (usec): min=213, max=7547, avg=390.27, stdev=267.84 00:13:08.030 clat percentiles (usec): 00:13:08.030 | 1.00th=[ 212], 5.00th=[ 233], 10.00th=[ 243], 20.00th=[ 265], 00:13:08.030 | 30.00th=[ 289], 40.00th=[ 359], 50.00th=[ 388], 60.00th=[ 396], 00:13:08.030 | 70.00th=[ 408], 80.00th=[ 420], 90.00th=[ 437], 95.00th=[ 469], 00:13:08.030 | 99.00th=[ 578], 99.50th=[ 1221], 99.90th=[ 3752], 99.95th=[ 7504], 00:13:08.030 | 99.99th=[ 7504] 00:13:08.030 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:08.030 
slat (usec): min=13, max=148, avg=26.93, stdev= 7.27 00:13:08.030 clat (usec): min=149, max=490, avg=278.43, stdev=60.95 00:13:08.030 lat (usec): min=177, max=571, avg=305.35, stdev=59.10 00:13:08.030 clat percentiles (usec): 00:13:08.030 | 1.00th=[ 165], 5.00th=[ 186], 10.00th=[ 198], 20.00th=[ 215], 00:13:08.030 | 30.00th=[ 233], 40.00th=[ 265], 50.00th=[ 289], 60.00th=[ 302], 00:13:08.030 | 70.00th=[ 314], 80.00th=[ 330], 90.00th=[ 351], 95.00th=[ 371], 00:13:08.030 | 99.00th=[ 441], 99.50th=[ 465], 99.90th=[ 482], 99.95th=[ 490], 00:13:08.030 | 99.99th=[ 490] 00:13:08.030 bw ( KiB/s): min= 8192, max= 8192, per=29.08%, avg=8192.00, stdev= 0.00, samples=1 00:13:08.030 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:08.030 lat (usec) : 250=25.34%, 500=72.93%, 750=1.49% 00:13:08.030 lat (msec) : 2=0.03%, 4=0.17%, 10=0.03% 00:13:08.030 cpu : usr=1.40%, sys=5.00%, ctx=2891, majf=0, minf=9 00:13:08.030 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:08.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.030 issued rwts: total=1345,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.030 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:08.030 job1: (groupid=0, jobs=1): err= 0: pid=74974: Mon Jul 22 18:20:19 2024 00:13:08.030 read: IOPS=1406, BW=5626KiB/s (5761kB/s)(5632KiB/1001msec) 00:13:08.030 slat (nsec): min=12737, max=91934, avg=18244.43, stdev=5908.21 00:13:08.030 clat (usec): min=198, max=2325, avg=359.92, stdev=98.68 00:13:08.030 lat (usec): min=217, max=2344, avg=378.16, stdev=96.71 00:13:08.030 clat percentiles (usec): 00:13:08.030 | 1.00th=[ 217], 5.00th=[ 229], 10.00th=[ 243], 20.00th=[ 265], 00:13:08.030 | 30.00th=[ 289], 40.00th=[ 363], 50.00th=[ 388], 60.00th=[ 400], 00:13:08.030 | 70.00th=[ 408], 80.00th=[ 420], 90.00th=[ 445], 95.00th=[ 490], 00:13:08.030 | 99.00th=[ 562], 99.50th=[ 570], 99.90th=[ 603], 99.95th=[ 2311], 00:13:08.030 | 99.99th=[ 2311] 00:13:08.030 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:08.030 slat (usec): min=13, max=143, avg=26.29, stdev= 6.29 00:13:08.030 clat (usec): min=150, max=502, avg=274.23, stdev=62.45 00:13:08.030 lat (usec): min=173, max=566, avg=300.52, stdev=61.94 00:13:08.030 clat percentiles (usec): 00:13:08.030 | 1.00th=[ 161], 5.00th=[ 180], 10.00th=[ 196], 20.00th=[ 212], 00:13:08.030 | 30.00th=[ 229], 40.00th=[ 251], 50.00th=[ 281], 60.00th=[ 297], 00:13:08.030 | 70.00th=[ 310], 80.00th=[ 326], 90.00th=[ 351], 95.00th=[ 371], 00:13:08.030 | 99.00th=[ 429], 99.50th=[ 457], 99.90th=[ 502], 99.95th=[ 502], 00:13:08.030 | 99.99th=[ 502] 00:13:08.030 bw ( KiB/s): min= 8192, max= 8192, per=29.08%, avg=8192.00, stdev= 0.00, samples=1 00:13:08.030 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:08.030 lat (usec) : 250=27.45%, 500=70.55%, 750=1.97% 00:13:08.030 lat (msec) : 4=0.03% 00:13:08.030 cpu : usr=1.70%, sys=4.70%, ctx=2948, majf=0, minf=11 00:13:08.030 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:08.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.030 issued rwts: total=1408,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.030 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:08.030 job2: (groupid=0, jobs=1): err= 0: pid=74975: 
Mon Jul 22 18:20:19 2024 00:13:08.030 read: IOPS=1833, BW=7333KiB/s (7509kB/s)(7340KiB/1001msec) 00:13:08.030 slat (nsec): min=13941, max=83293, avg=23189.98, stdev=7895.65 00:13:08.030 clat (usec): min=202, max=566, avg=254.14, stdev=27.36 00:13:08.030 lat (usec): min=219, max=585, avg=277.33, stdev=29.29 00:13:08.030 clat percentiles (usec): 00:13:08.030 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 233], 00:13:08.030 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 258], 00:13:08.030 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 306], 00:13:08.030 | 99.00th=[ 330], 99.50th=[ 343], 99.90th=[ 469], 99.95th=[ 570], 00:13:08.030 | 99.99th=[ 570] 00:13:08.030 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:13:08.030 slat (usec): min=19, max=133, avg=29.99, stdev=10.82 00:13:08.030 clat (usec): min=156, max=896, avg=205.26, stdev=30.85 00:13:08.030 lat (usec): min=181, max=922, avg=235.25, stdev=34.71 00:13:08.030 clat percentiles (usec): 00:13:08.030 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 182], 00:13:08.030 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 202], 60.00th=[ 208], 00:13:08.030 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 241], 95.00th=[ 255], 00:13:08.030 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 351], 99.95th=[ 502], 00:13:08.030 | 99.99th=[ 898] 00:13:08.030 bw ( KiB/s): min= 8192, max= 8192, per=29.08%, avg=8192.00, stdev= 0.00, samples=1 00:13:08.030 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:08.030 lat (usec) : 250=72.70%, 500=27.22%, 750=0.05%, 1000=0.03% 00:13:08.030 cpu : usr=1.50%, sys=8.50%, ctx=3883, majf=0, minf=5 00:13:08.030 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:08.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.030 issued rwts: total=1835,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.030 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:08.030 job3: (groupid=0, jobs=1): err= 0: pid=74976: Mon Jul 22 18:20:19 2024 00:13:08.030 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:08.030 slat (nsec): min=17264, max=64630, avg=21250.36, stdev=3743.04 00:13:08.030 clat (usec): min=230, max=439, avg=285.03, stdev=22.19 00:13:08.030 lat (usec): min=248, max=465, avg=306.28, stdev=22.84 00:13:08.030 clat percentiles (usec): 00:13:08.030 | 1.00th=[ 241], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 269], 00:13:08.030 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 289], 00:13:08.030 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 314], 95.00th=[ 326], 00:13:08.030 | 99.00th=[ 338], 99.50th=[ 343], 99.90th=[ 363], 99.95th=[ 441], 00:13:08.030 | 99.99th=[ 441] 00:13:08.030 write: IOPS=1928, BW=7712KiB/s (7897kB/s)(7720KiB/1001msec); 0 zone resets 00:13:08.030 slat (usec): min=25, max=187, avg=33.47, stdev= 7.73 00:13:08.030 clat (usec): min=188, max=2777, avg=236.88, stdev=65.19 00:13:08.030 lat (usec): min=216, max=2806, avg=270.35, stdev=66.02 00:13:08.030 clat percentiles (usec): 00:13:08.030 | 1.00th=[ 196], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 219], 00:13:08.030 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 239], 00:13:08.030 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 269], 00:13:08.030 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 865], 99.95th=[ 2769], 00:13:08.030 | 99.99th=[ 2769] 00:13:08.030 bw ( KiB/s): min= 8192, max= 8192, 
per=29.08%, avg=8192.00, stdev= 0.00, samples=1 00:13:08.030 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:08.030 lat (usec) : 250=45.50%, 500=54.39%, 750=0.03%, 1000=0.06% 00:13:08.030 lat (msec) : 4=0.03% 00:13:08.030 cpu : usr=1.60%, sys=7.60%, ctx=3473, majf=0, minf=10 00:13:08.030 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:08.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.030 issued rwts: total=1536,1930,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.030 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:08.030 00:13:08.031 Run status group 0 (all jobs): 00:13:08.031 READ: bw=23.9MiB/s (25.1MB/s), 5375KiB/s-7333KiB/s (5504kB/s-7509kB/s), io=23.9MiB (25.1MB), run=1001-1001msec 00:13:08.031 WRITE: bw=27.5MiB/s (28.8MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=27.5MiB (28.9MB), run=1001-1001msec 00:13:08.031 00:13:08.031 Disk stats (read/write): 00:13:08.031 nvme0n1: ios=1073/1524, merge=0/0, ticks=392/428, in_queue=820, util=87.05% 00:13:08.031 nvme0n2: ios=1126/1536, merge=0/0, ticks=473/430, in_queue=903, util=93.08% 00:13:08.031 nvme0n3: ios=1592/1777, merge=0/0, ticks=505/389, in_queue=894, util=92.94% 00:13:08.031 nvme0n4: ios=1426/1536, merge=0/0, ticks=422/377, in_queue=799, util=89.76% 00:13:08.031 18:20:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:08.031 [global] 00:13:08.031 thread=1 00:13:08.031 invalidate=1 00:13:08.031 rw=randwrite 00:13:08.031 time_based=1 00:13:08.031 runtime=1 00:13:08.031 ioengine=libaio 00:13:08.031 direct=1 00:13:08.031 bs=4096 00:13:08.031 iodepth=1 00:13:08.031 norandommap=0 00:13:08.031 numjobs=1 00:13:08.031 00:13:08.031 verify_dump=1 00:13:08.031 verify_backlog=512 00:13:08.031 verify_state_save=0 00:13:08.031 do_verify=1 00:13:08.031 verify=crc32c-intel 00:13:08.031 [job0] 00:13:08.031 filename=/dev/nvme0n1 00:13:08.031 [job1] 00:13:08.031 filename=/dev/nvme0n2 00:13:08.031 [job2] 00:13:08.031 filename=/dev/nvme0n3 00:13:08.031 [job3] 00:13:08.031 filename=/dev/nvme0n4 00:13:08.031 Could not set queue depth (nvme0n1) 00:13:08.031 Could not set queue depth (nvme0n2) 00:13:08.031 Could not set queue depth (nvme0n3) 00:13:08.031 Could not set queue depth (nvme0n4) 00:13:08.031 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:08.031 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:08.031 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:08.031 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:08.031 fio-3.35 00:13:08.031 Starting 4 threads 00:13:09.407 00:13:09.408 job0: (groupid=0, jobs=1): err= 0: pid=75035: Mon Jul 22 18:20:21 2024 00:13:09.408 read: IOPS=1699, BW=6797KiB/s (6960kB/s)(6804KiB/1001msec) 00:13:09.408 slat (nsec): min=14510, max=51369, avg=18934.21, stdev=4182.66 00:13:09.408 clat (usec): min=195, max=432, avg=267.94, stdev=39.63 00:13:09.408 lat (usec): min=214, max=449, avg=286.88, stdev=41.08 00:13:09.408 clat percentiles (usec): 00:13:09.408 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 223], 20.00th=[ 233], 00:13:09.408 | 30.00th=[ 241], 40.00th=[ 249], 
50.00th=[ 262], 60.00th=[ 273], 00:13:09.408 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 326], 95.00th=[ 343], 00:13:09.408 | 99.00th=[ 359], 99.50th=[ 367], 99.90th=[ 433], 99.95th=[ 433], 00:13:09.408 | 99.99th=[ 433] 00:13:09.408 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:13:09.408 slat (usec): min=21, max=123, avg=28.67, stdev= 8.85 00:13:09.408 clat (usec): min=145, max=735, avg=217.75, stdev=49.23 00:13:09.408 lat (usec): min=168, max=760, avg=246.42, stdev=52.86 00:13:09.408 clat percentiles (usec): 00:13:09.408 | 1.00th=[ 155], 5.00th=[ 165], 10.00th=[ 176], 20.00th=[ 184], 00:13:09.408 | 30.00th=[ 192], 40.00th=[ 200], 50.00th=[ 208], 60.00th=[ 217], 00:13:09.408 | 70.00th=[ 225], 80.00th=[ 239], 90.00th=[ 269], 95.00th=[ 310], 00:13:09.408 | 99.00th=[ 396], 99.50th=[ 412], 99.90th=[ 594], 99.95th=[ 619], 00:13:09.408 | 99.99th=[ 734] 00:13:09.408 bw ( KiB/s): min= 8192, max= 8192, per=31.23%, avg=8192.00, stdev= 0.00, samples=1 00:13:09.408 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:09.408 lat (usec) : 250=65.06%, 500=34.73%, 750=0.21% 00:13:09.408 cpu : usr=1.60%, sys=6.80%, ctx=3750, majf=0, minf=9 00:13:09.408 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:09.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:09.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:09.408 issued rwts: total=1701,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:09.408 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:09.408 job1: (groupid=0, jobs=1): err= 0: pid=75036: Mon Jul 22 18:20:21 2024 00:13:09.408 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:13:09.408 slat (nsec): min=13705, max=66144, avg=20250.57, stdev=6945.48 00:13:09.408 clat (usec): min=302, max=681, avg=478.76, stdev=35.91 00:13:09.408 lat (usec): min=317, max=698, avg=499.01, stdev=35.90 00:13:09.408 clat percentiles (usec): 00:13:09.408 | 1.00th=[ 416], 5.00th=[ 437], 10.00th=[ 449], 20.00th=[ 457], 00:13:09.408 | 30.00th=[ 461], 40.00th=[ 469], 50.00th=[ 474], 60.00th=[ 482], 00:13:09.408 | 70.00th=[ 486], 80.00th=[ 498], 90.00th=[ 510], 95.00th=[ 529], 00:13:09.408 | 99.00th=[ 619], 99.50th=[ 635], 99.90th=[ 668], 99.95th=[ 685], 00:13:09.408 | 99.99th=[ 685] 00:13:09.408 write: IOPS=1200, BW=4803KiB/s (4918kB/s)(4808KiB/1001msec); 0 zone resets 00:13:09.408 slat (nsec): min=14934, max=78783, avg=32514.93, stdev=8633.87 00:13:09.408 clat (usec): min=130, max=7573, avg=369.76, stdev=222.01 00:13:09.408 lat (usec): min=159, max=7624, avg=402.28, stdev=222.28 00:13:09.408 clat percentiles (usec): 00:13:09.408 | 1.00th=[ 204], 5.00th=[ 260], 10.00th=[ 289], 20.00th=[ 326], 00:13:09.408 | 30.00th=[ 347], 40.00th=[ 359], 50.00th=[ 371], 60.00th=[ 379], 00:13:09.408 | 70.00th=[ 388], 80.00th=[ 400], 90.00th=[ 416], 95.00th=[ 441], 00:13:09.408 | 99.00th=[ 506], 99.50th=[ 586], 99.90th=[ 2114], 99.95th=[ 7570], 00:13:09.408 | 99.99th=[ 7570] 00:13:09.408 bw ( KiB/s): min= 4849, max= 4849, per=18.49%, avg=4849.00, stdev= 0.00, samples=1 00:13:09.408 iops : min= 1212, max= 1212, avg=1212.00, stdev= 0.00, samples=1 00:13:09.408 lat (usec) : 250=2.47%, 500=89.40%, 750=7.95%, 1000=0.09% 00:13:09.408 lat (msec) : 4=0.04%, 10=0.04% 00:13:09.408 cpu : usr=1.40%, sys=4.60%, ctx=2226, majf=0, minf=12 00:13:09.408 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:09.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:13:09.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:09.408 issued rwts: total=1024,1202,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:09.408 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:09.408 job2: (groupid=0, jobs=1): err= 0: pid=75037: Mon Jul 22 18:20:21 2024 00:13:09.408 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:13:09.408 slat (nsec): min=13944, max=50983, avg=21416.36, stdev=5354.65 00:13:09.408 clat (usec): min=333, max=670, avg=477.38, stdev=38.52 00:13:09.408 lat (usec): min=347, max=695, avg=498.80, stdev=38.54 00:13:09.408 clat percentiles (usec): 00:13:09.408 | 1.00th=[ 400], 5.00th=[ 429], 10.00th=[ 437], 20.00th=[ 449], 00:13:09.408 | 30.00th=[ 461], 40.00th=[ 465], 50.00th=[ 474], 60.00th=[ 482], 00:13:09.408 | 70.00th=[ 490], 80.00th=[ 498], 90.00th=[ 515], 95.00th=[ 545], 00:13:09.408 | 99.00th=[ 619], 99.50th=[ 635], 99.90th=[ 668], 99.95th=[ 668], 00:13:09.408 | 99.99th=[ 668] 00:13:09.408 write: IOPS=1264, BW=5059KiB/s (5180kB/s)(5064KiB/1001msec); 0 zone resets 00:13:09.408 slat (usec): min=15, max=152, avg=33.00, stdev= 9.00 00:13:09.408 clat (usec): min=203, max=1025, avg=349.08, stdev=70.15 00:13:09.408 lat (usec): min=238, max=1053, avg=382.09, stdev=68.63 00:13:09.408 clat percentiles (usec): 00:13:09.408 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 241], 20.00th=[ 265], 00:13:09.408 | 30.00th=[ 334], 40.00th=[ 355], 50.00th=[ 367], 60.00th=[ 375], 00:13:09.408 | 70.00th=[ 388], 80.00th=[ 400], 90.00th=[ 420], 95.00th=[ 441], 00:13:09.408 | 99.00th=[ 490], 99.50th=[ 502], 99.90th=[ 570], 99.95th=[ 1029], 00:13:09.408 | 99.99th=[ 1029] 00:13:09.408 bw ( KiB/s): min= 5448, max= 5448, per=20.77%, avg=5448.00, stdev= 0.00, samples=1 00:13:09.408 iops : min= 1362, max= 1362, avg=1362.00, stdev= 0.00, samples=1 00:13:09.408 lat (usec) : 250=8.25%, 500=83.32%, 750=8.38% 00:13:09.408 lat (msec) : 2=0.04% 00:13:09.408 cpu : usr=1.40%, sys=4.80%, ctx=2290, majf=0, minf=17 00:13:09.408 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:09.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:09.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:09.408 issued rwts: total=1024,1266,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:09.408 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:09.408 job3: (groupid=0, jobs=1): err= 0: pid=75038: Mon Jul 22 18:20:21 2024 00:13:09.408 read: IOPS=2008, BW=8036KiB/s (8229kB/s)(8044KiB/1001msec) 00:13:09.408 slat (nsec): min=15042, max=34162, avg=17318.98, stdev=2188.63 00:13:09.408 clat (usec): min=204, max=388, avg=247.77, stdev=20.35 00:13:09.408 lat (usec): min=221, max=406, avg=265.09, stdev=20.55 00:13:09.408 clat percentiles (usec): 00:13:09.408 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 231], 00:13:09.408 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:13:09.408 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 285], 00:13:09.408 | 99.00th=[ 314], 99.50th=[ 343], 99.90th=[ 359], 99.95th=[ 367], 00:13:09.408 | 99.99th=[ 388] 00:13:09.408 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:13:09.408 slat (nsec): min=21170, max=85787, avg=25308.40, stdev=4578.48 00:13:09.408 clat (usec): min=156, max=2039, avg=199.27, stdev=46.98 00:13:09.408 lat (usec): min=179, max=2063, avg=224.57, stdev=47.48 00:13:09.408 clat percentiles (usec): 00:13:09.408 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 
174], 20.00th=[ 182], 00:13:09.408 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 202], 00:13:09.408 | 70.00th=[ 206], 80.00th=[ 215], 90.00th=[ 225], 95.00th=[ 237], 00:13:09.408 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[ 392], 99.95th=[ 685], 00:13:09.408 | 99.99th=[ 2040] 00:13:09.408 bw ( KiB/s): min= 8192, max= 8192, per=31.23%, avg=8192.00, stdev= 0.00, samples=1 00:13:09.409 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:09.409 lat (usec) : 250=79.45%, 500=20.50%, 750=0.02% 00:13:09.409 lat (msec) : 4=0.02% 00:13:09.409 cpu : usr=1.70%, sys=6.20%, ctx=4059, majf=0, minf=7 00:13:09.409 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:09.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:09.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:09.409 issued rwts: total=2011,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:09.409 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:09.409 00:13:09.409 Run status group 0 (all jobs): 00:13:09.409 READ: bw=22.5MiB/s (23.6MB/s), 4092KiB/s-8036KiB/s (4190kB/s-8229kB/s), io=22.5MiB (23.6MB), run=1001-1001msec 00:13:09.409 WRITE: bw=25.6MiB/s (26.9MB/s), 4803KiB/s-8184KiB/s (4918kB/s-8380kB/s), io=25.6MiB (26.9MB), run=1001-1001msec 00:13:09.409 00:13:09.409 Disk stats (read/write): 00:13:09.409 nvme0n1: ios=1586/1641, merge=0/0, ticks=443/375, in_queue=818, util=89.08% 00:13:09.409 nvme0n2: ios=959/1024, merge=0/0, ticks=499/383, in_queue=882, util=90.82% 00:13:09.409 nvme0n3: ios=1001/1024, merge=0/0, ticks=558/362, in_queue=920, util=91.40% 00:13:09.409 nvme0n4: ios=1536/2037, merge=0/0, ticks=387/429, in_queue=816, util=89.81% 00:13:09.409 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:09.409 [global] 00:13:09.409 thread=1 00:13:09.409 invalidate=1 00:13:09.409 rw=write 00:13:09.409 time_based=1 00:13:09.409 runtime=1 00:13:09.409 ioengine=libaio 00:13:09.409 direct=1 00:13:09.409 bs=4096 00:13:09.409 iodepth=128 00:13:09.409 norandommap=0 00:13:09.409 numjobs=1 00:13:09.409 00:13:09.409 verify_dump=1 00:13:09.409 verify_backlog=512 00:13:09.409 verify_state_save=0 00:13:09.409 do_verify=1 00:13:09.409 verify=crc32c-intel 00:13:09.409 [job0] 00:13:09.409 filename=/dev/nvme0n1 00:13:09.409 [job1] 00:13:09.409 filename=/dev/nvme0n2 00:13:09.409 [job2] 00:13:09.409 filename=/dev/nvme0n3 00:13:09.409 [job3] 00:13:09.409 filename=/dev/nvme0n4 00:13:09.409 Could not set queue depth (nvme0n1) 00:13:09.409 Could not set queue depth (nvme0n2) 00:13:09.409 Could not set queue depth (nvme0n3) 00:13:09.409 Could not set queue depth (nvme0n4) 00:13:09.409 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:09.409 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:09.409 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:09.409 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:09.409 fio-3.35 00:13:09.409 Starting 4 threads 00:13:10.790 00:13:10.790 job0: (groupid=0, jobs=1): err= 0: pid=75096: Mon Jul 22 18:20:22 2024 00:13:10.790 read: IOPS=3166, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1001msec) 00:13:10.790 slat (usec): min=9, max=4692, avg=143.00, stdev=694.97 
00:13:10.790 clat (usec): min=538, max=23592, avg=18400.32, stdev=2254.23 00:13:10.790 lat (usec): min=4670, max=24481, avg=18543.33, stdev=2164.99 00:13:10.790 clat percentiles (usec): 00:13:10.790 | 1.00th=[ 5342], 5.00th=[14877], 10.00th=[16712], 20.00th=[17433], 00:13:10.790 | 30.00th=[17957], 40.00th=[18482], 50.00th=[18744], 60.00th=[19006], 00:13:10.790 | 70.00th=[19268], 80.00th=[19530], 90.00th=[20055], 95.00th=[21365], 00:13:10.790 | 99.00th=[23200], 99.50th=[23462], 99.90th=[23462], 99.95th=[23462], 00:13:10.790 | 99.99th=[23462] 00:13:10.790 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:13:10.790 slat (usec): min=11, max=7895, avg=144.17, stdev=639.45 00:13:10.790 clat (usec): min=13902, max=23070, avg=18912.50, stdev=2071.08 00:13:10.790 lat (usec): min=14190, max=23606, avg=19056.67, stdev=2055.62 00:13:10.790 clat percentiles (usec): 00:13:10.790 | 1.00th=[14615], 5.00th=[15270], 10.00th=[15926], 20.00th=[16909], 00:13:10.790 | 30.00th=[17695], 40.00th=[18220], 50.00th=[19268], 60.00th=[19792], 00:13:10.790 | 70.00th=[20317], 80.00th=[20841], 90.00th=[21627], 95.00th=[21890], 00:13:10.790 | 99.00th=[22676], 99.50th=[22938], 99.90th=[22938], 99.95th=[22938], 00:13:10.790 | 99.99th=[23200] 00:13:10.790 bw ( KiB/s): min=14520, max=14520, per=27.85%, avg=14520.00, stdev= 0.00, samples=1 00:13:10.790 iops : min= 3630, max= 3630, avg=3630.00, stdev= 0.00, samples=1 00:13:10.790 lat (usec) : 750=0.01% 00:13:10.790 lat (msec) : 10=0.71%, 20=75.20%, 50=24.07% 00:13:10.790 cpu : usr=3.50%, sys=11.00%, ctx=320, majf=0, minf=3 00:13:10.790 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:10.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:10.790 issued rwts: total=3170,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:10.790 job1: (groupid=0, jobs=1): err= 0: pid=75097: Mon Jul 22 18:20:22 2024 00:13:10.790 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:13:10.790 slat (usec): min=6, max=7154, avg=144.66, stdev=724.61 00:13:10.790 clat (usec): min=12695, max=25256, avg=19801.42, stdev=2320.40 00:13:10.790 lat (usec): min=13702, max=25769, avg=19946.07, stdev=2227.40 00:13:10.790 clat percentiles (usec): 00:13:10.790 | 1.00th=[14353], 5.00th=[16581], 10.00th=[17171], 20.00th=[17695], 00:13:10.790 | 30.00th=[17957], 40.00th=[19268], 50.00th=[19792], 60.00th=[20055], 00:13:10.790 | 70.00th=[20579], 80.00th=[22414], 90.00th=[23200], 95.00th=[23462], 00:13:10.790 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25297], 99.95th=[25297], 00:13:10.790 | 99.99th=[25297] 00:13:10.790 write: IOPS=3362, BW=13.1MiB/s (13.8MB/s)(13.2MiB/1002msec); 0 zone resets 00:13:10.790 slat (usec): min=14, max=5933, avg=156.13, stdev=719.70 00:13:10.790 clat (usec): min=1361, max=26515, avg=19435.51, stdev=3087.29 00:13:10.790 lat (usec): min=1387, max=26540, avg=19591.64, stdev=3056.75 00:13:10.790 clat percentiles (usec): 00:13:10.790 | 1.00th=[ 6980], 5.00th=[15270], 10.00th=[15664], 20.00th=[16450], 00:13:10.790 | 30.00th=[18744], 40.00th=[19530], 50.00th=[20055], 60.00th=[20317], 00:13:10.790 | 70.00th=[20841], 80.00th=[21627], 90.00th=[22676], 95.00th=[23462], 00:13:10.790 | 99.00th=[25822], 99.50th=[26346], 99.90th=[26608], 99.95th=[26608], 00:13:10.790 | 99.99th=[26608] 00:13:10.790 bw ( KiB/s): min=12776, max=13160, per=24.88%, avg=12968.00, 
stdev=271.53, samples=2 00:13:10.790 iops : min= 3194, max= 3290, avg=3242.00, stdev=67.88, samples=2 00:13:10.790 lat (msec) : 2=0.16%, 10=0.61%, 20=53.98%, 50=45.26% 00:13:10.790 cpu : usr=3.90%, sys=9.89%, ctx=250, majf=0, minf=9 00:13:10.790 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:13:10.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:10.790 issued rwts: total=3072,3369,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:10.790 job2: (groupid=0, jobs=1): err= 0: pid=75099: Mon Jul 22 18:20:22 2024 00:13:10.790 read: IOPS=3049, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:13:10.790 slat (usec): min=8, max=7178, avg=162.96, stdev=899.44 00:13:10.790 clat (usec): min=591, max=29303, avg=20957.73, stdev=2750.62 00:13:10.790 lat (usec): min=5274, max=30010, avg=21120.69, stdev=2851.48 00:13:10.790 clat percentiles (usec): 00:13:10.790 | 1.00th=[ 5932], 5.00th=[16909], 10.00th=[18744], 20.00th=[19530], 00:13:10.790 | 30.00th=[20841], 40.00th=[21365], 50.00th=[21365], 60.00th=[21627], 00:13:10.790 | 70.00th=[21890], 80.00th=[22414], 90.00th=[22938], 95.00th=[23462], 00:13:10.790 | 99.00th=[27395], 99.50th=[27919], 99.90th=[28967], 99.95th=[29230], 00:13:10.790 | 99.99th=[29230] 00:13:10.790 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:13:10.790 slat (usec): min=15, max=6770, avg=155.30, stdev=828.86 00:13:10.790 clat (usec): min=13214, max=28380, avg=20286.56, stdev=1856.33 00:13:10.790 lat (usec): min=13242, max=28455, avg=20441.86, stdev=1907.34 00:13:10.790 clat percentiles (usec): 00:13:10.790 | 1.00th=[14484], 5.00th=[15533], 10.00th=[19268], 20.00th=[19530], 00:13:10.790 | 30.00th=[20055], 40.00th=[20317], 50.00th=[20579], 60.00th=[20841], 00:13:10.790 | 70.00th=[21103], 80.00th=[21103], 90.00th=[21627], 95.00th=[22152], 00:13:10.790 | 99.00th=[25822], 99.50th=[26608], 99.90th=[27132], 99.95th=[27395], 00:13:10.790 | 99.99th=[28443] 00:13:10.790 bw ( KiB/s): min=12263, max=12288, per=23.55%, avg=12275.50, stdev=17.68, samples=2 00:13:10.790 iops : min= 3065, max= 3072, avg=3068.50, stdev= 4.95, samples=2 00:13:10.790 lat (usec) : 750=0.02% 00:13:10.791 lat (msec) : 10=0.68%, 20=24.69%, 50=74.61% 00:13:10.791 cpu : usr=2.99%, sys=9.96%, ctx=190, majf=0, minf=9 00:13:10.791 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:13:10.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:10.791 issued rwts: total=3065,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.791 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:10.791 job3: (groupid=0, jobs=1): err= 0: pid=75100: Mon Jul 22 18:20:22 2024 00:13:10.791 read: IOPS=2744, BW=10.7MiB/s (11.2MB/s)(10.8MiB/1003msec) 00:13:10.791 slat (usec): min=7, max=5991, avg=167.85, stdev=813.01 00:13:10.791 clat (usec): min=720, max=25227, avg=21586.24, stdev=2817.46 00:13:10.791 lat (usec): min=5189, max=25243, avg=21754.09, stdev=2727.33 00:13:10.791 clat percentiles (usec): 00:13:10.791 | 1.00th=[ 5669], 5.00th=[17433], 10.00th=[19006], 20.00th=[20579], 00:13:10.791 | 30.00th=[21103], 40.00th=[21627], 50.00th=[22152], 60.00th=[22676], 00:13:10.791 | 70.00th=[22676], 80.00th=[23462], 90.00th=[23987], 95.00th=[24773], 00:13:10.791 | 99.00th=[25297], 
99.50th=[25297], 99.90th=[25297], 99.95th=[25297], 00:13:10.791 | 99.99th=[25297] 00:13:10.791 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:13:10.791 slat (usec): min=9, max=6378, avg=167.09, stdev=783.29 00:13:10.791 clat (usec): min=15543, max=27374, avg=21701.24, stdev=2537.87 00:13:10.791 lat (usec): min=15954, max=27396, avg=21868.33, stdev=2479.32 00:13:10.791 clat percentiles (usec): 00:13:10.791 | 1.00th=[16319], 5.00th=[17171], 10.00th=[18220], 20.00th=[20055], 00:13:10.791 | 30.00th=[20317], 40.00th=[20841], 50.00th=[21627], 60.00th=[22152], 00:13:10.791 | 70.00th=[22676], 80.00th=[23987], 90.00th=[25560], 95.00th=[26084], 00:13:10.791 | 99.00th=[27132], 99.50th=[27132], 99.90th=[27395], 99.95th=[27395], 00:13:10.791 | 99.99th=[27395] 00:13:10.791 bw ( KiB/s): min=12288, max=12288, per=23.57%, avg=12288.00, stdev= 0.00, samples=2 00:13:10.791 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:13:10.791 lat (usec) : 750=0.02% 00:13:10.791 lat (msec) : 10=0.55%, 20=17.92%, 50=81.51% 00:13:10.791 cpu : usr=3.39%, sys=9.18%, ctx=277, majf=0, minf=10 00:13:10.791 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:13:10.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:10.791 issued rwts: total=2753,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.791 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:10.791 00:13:10.791 Run status group 0 (all jobs): 00:13:10.791 READ: bw=46.9MiB/s (49.2MB/s), 10.7MiB/s-12.4MiB/s (11.2MB/s-13.0MB/s), io=47.1MiB (49.4MB), run=1001-1005msec 00:13:10.791 WRITE: bw=50.9MiB/s (53.4MB/s), 11.9MiB/s-14.0MiB/s (12.5MB/s-14.7MB/s), io=51.2MiB (53.6MB), run=1001-1005msec 00:13:10.791 00:13:10.791 Disk stats (read/write): 00:13:10.791 nvme0n1: ios=2834/3072, merge=0/0, ticks=11913/13060, in_queue=24973, util=88.47% 00:13:10.791 nvme0n2: ios=2609/3006, merge=0/0, ticks=11674/13307, in_queue=24981, util=91.30% 00:13:10.791 nvme0n3: ios=2601/2740, merge=0/0, ticks=16789/16362, in_queue=33151, util=91.26% 00:13:10.791 nvme0n4: ios=2433/2560, merge=0/0, ticks=12609/12540, in_queue=25149, util=89.75% 00:13:10.791 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:10.791 [global] 00:13:10.791 thread=1 00:13:10.791 invalidate=1 00:13:10.791 rw=randwrite 00:13:10.791 time_based=1 00:13:10.791 runtime=1 00:13:10.791 ioengine=libaio 00:13:10.791 direct=1 00:13:10.791 bs=4096 00:13:10.791 iodepth=128 00:13:10.791 norandommap=0 00:13:10.791 numjobs=1 00:13:10.791 00:13:10.791 verify_dump=1 00:13:10.791 verify_backlog=512 00:13:10.791 verify_state_save=0 00:13:10.791 do_verify=1 00:13:10.791 verify=crc32c-intel 00:13:10.791 [job0] 00:13:10.791 filename=/dev/nvme0n1 00:13:10.791 [job1] 00:13:10.791 filename=/dev/nvme0n2 00:13:10.791 [job2] 00:13:10.791 filename=/dev/nvme0n3 00:13:10.791 [job3] 00:13:10.791 filename=/dev/nvme0n4 00:13:10.791 Could not set queue depth (nvme0n1) 00:13:10.791 Could not set queue depth (nvme0n2) 00:13:10.791 Could not set queue depth (nvme0n3) 00:13:10.791 Could not set queue depth (nvme0n4) 00:13:10.791 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:10.791 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:13:10.791 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:10.791 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:10.791 fio-3.35 00:13:10.791 Starting 4 threads 00:13:12.200 00:13:12.200 job0: (groupid=0, jobs=1): err= 0: pid=75153: Mon Jul 22 18:20:23 2024 00:13:12.200 read: IOPS=4884, BW=19.1MiB/s (20.0MB/s)(19.2MiB/1008msec) 00:13:12.200 slat (usec): min=5, max=12006, avg=107.89, stdev=688.03 00:13:12.200 clat (usec): min=3218, max=25176, avg=13562.52, stdev=3610.29 00:13:12.200 lat (usec): min=5326, max=25192, avg=13670.41, stdev=3640.67 00:13:12.200 clat percentiles (usec): 00:13:12.200 | 1.00th=[ 5932], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10945], 00:13:12.200 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12518], 60.00th=[12911], 00:13:12.200 | 70.00th=[14615], 80.00th=[16057], 90.00th=[19268], 95.00th=[21627], 00:13:12.200 | 99.00th=[23462], 99.50th=[23725], 99.90th=[25035], 99.95th=[25035], 00:13:12.200 | 99.99th=[25297] 00:13:12.200 write: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec); 0 zone resets 00:13:12.200 slat (usec): min=3, max=10361, avg=84.17, stdev=329.83 00:13:12.200 clat (usec): min=3901, max=25102, avg=11891.66, stdev=2569.87 00:13:12.200 lat (usec): min=3937, max=25112, avg=11975.83, stdev=2592.91 00:13:12.200 clat percentiles (usec): 00:13:12.200 | 1.00th=[ 5211], 5.00th=[ 6128], 10.00th=[ 7046], 20.00th=[10290], 00:13:12.200 | 30.00th=[12256], 40.00th=[12911], 50.00th=[13173], 60.00th=[13304], 00:13:12.200 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13566], 95.00th=[13698], 00:13:12.200 | 99.00th=[13960], 99.50th=[14091], 99.90th=[23725], 99.95th=[24249], 00:13:12.200 | 99.99th=[25035] 00:13:12.200 bw ( KiB/s): min=20480, max=20521, per=35.92%, avg=20500.50, stdev=28.99, samples=2 00:13:12.200 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:13:12.201 lat (msec) : 4=0.05%, 10=13.95%, 20=81.65%, 50=4.35% 00:13:12.201 cpu : usr=4.87%, sys=12.51%, ctx=774, majf=0, minf=13 00:13:12.201 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:12.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:12.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:12.201 issued rwts: total=4924,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:12.201 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:12.201 job1: (groupid=0, jobs=1): err= 0: pid=75154: Mon Jul 22 18:20:23 2024 00:13:12.201 read: IOPS=2188, BW=8754KiB/s (8964kB/s)(8824KiB/1008msec) 00:13:12.201 slat (usec): min=3, max=23828, avg=235.21, stdev=1547.74 00:13:12.201 clat (usec): min=6010, max=91170, avg=28137.70, stdev=14585.89 00:13:12.201 lat (usec): min=6022, max=91181, avg=28372.91, stdev=14689.50 00:13:12.201 clat percentiles (usec): 00:13:12.201 | 1.00th=[ 9503], 5.00th=[11863], 10.00th=[14091], 20.00th=[14484], 00:13:12.201 | 30.00th=[19530], 40.00th=[24511], 50.00th=[27132], 60.00th=[28967], 00:13:12.201 | 70.00th=[32375], 80.00th=[33162], 90.00th=[43779], 95.00th=[49021], 00:13:12.201 | 99.00th=[86508], 99.50th=[89654], 99.90th=[90702], 99.95th=[90702], 00:13:12.202 | 99.99th=[90702] 00:13:12.202 write: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec); 0 zone resets 00:13:12.202 slat (usec): min=4, max=25194, avg=180.31, stdev=1113.75 00:13:12.202 clat (usec): min=4137, max=91142, avg=25675.50, stdev=9106.90 00:13:12.202 lat (usec): 
min=4161, max=91148, avg=25855.81, stdev=9197.42 00:13:12.202 clat percentiles (usec): 00:13:12.202 | 1.00th=[ 8848], 5.00th=[11863], 10.00th=[12518], 20.00th=[16909], 00:13:12.202 | 30.00th=[23200], 40.00th=[26870], 50.00th=[28181], 60.00th=[28443], 00:13:12.202 | 70.00th=[28967], 80.00th=[31327], 90.00th=[31851], 95.00th=[33424], 00:13:12.202 | 99.00th=[63701], 99.50th=[71828], 99.90th=[77071], 99.95th=[90702], 00:13:12.202 | 99.99th=[90702] 00:13:12.202 bw ( KiB/s): min=10128, max=10352, per=17.94%, avg=10240.00, stdev=158.39, samples=2 00:13:12.202 iops : min= 2532, max= 2588, avg=2560.00, stdev=39.60, samples=2 00:13:12.202 lat (msec) : 10=1.51%, 20=26.19%, 50=69.07%, 100=3.23% 00:13:12.202 cpu : usr=2.48%, sys=6.26%, ctx=342, majf=0, minf=9 00:13:12.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:13:12.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:12.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:12.202 issued rwts: total=2206,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:12.202 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:12.202 job2: (groupid=0, jobs=1): err= 0: pid=75155: Mon Jul 22 18:20:23 2024 00:13:12.202 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec) 00:13:12.203 slat (usec): min=5, max=37183, avg=242.84, stdev=1666.09 00:13:12.203 clat (usec): min=8535, max=70666, avg=30420.68, stdev=10233.55 00:13:12.203 lat (usec): min=8550, max=76404, avg=30663.52, stdev=10369.30 00:13:12.203 clat percentiles (usec): 00:13:12.203 | 1.00th=[13698], 5.00th=[15664], 10.00th=[16188], 20.00th=[19268], 00:13:12.203 | 30.00th=[25822], 40.00th=[30016], 50.00th=[31589], 60.00th=[32637], 00:13:12.203 | 70.00th=[33424], 80.00th=[37487], 90.00th=[45876], 95.00th=[50594], 00:13:12.203 | 99.00th=[54789], 99.50th=[56361], 99.90th=[60031], 99.95th=[62653], 00:13:12.203 | 99.99th=[70779] 00:13:12.203 write: IOPS=2089, BW=8357KiB/s (8557kB/s)(8432KiB/1009msec); 0 zone resets 00:13:12.203 slat (usec): min=6, max=41999, avg=229.29, stdev=1610.40 00:13:12.203 clat (usec): min=5069, max=73467, avg=31020.05, stdev=10609.47 00:13:12.203 lat (usec): min=5107, max=73527, avg=31249.34, stdev=10714.05 00:13:12.203 clat percentiles (usec): 00:13:12.203 | 1.00th=[ 7832], 5.00th=[13960], 10.00th=[23200], 20.00th=[27132], 00:13:12.203 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28705], 60.00th=[30016], 00:13:12.203 | 70.00th=[31851], 80.00th=[32637], 90.00th=[44827], 95.00th=[53740], 00:13:12.203 | 99.00th=[69731], 99.50th=[70779], 99.90th=[72877], 99.95th=[72877], 00:13:12.203 | 99.99th=[73925] 00:13:12.203 bw ( KiB/s): min= 8192, max= 8208, per=14.37%, avg=8200.00, stdev=11.31, samples=2 00:13:12.203 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:13:12.203 lat (msec) : 10=1.92%, 20=12.39%, 50=80.05%, 100=5.63% 00:13:12.203 cpu : usr=1.98%, sys=6.45%, ctx=349, majf=0, minf=15 00:13:12.203 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:13:12.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:12.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:12.203 issued rwts: total=2048,2108,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:12.203 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:12.203 job3: (groupid=0, jobs=1): err= 0: pid=75157: Mon Jul 22 18:20:23 2024 00:13:12.203 read: IOPS=4334, BW=16.9MiB/s (17.8MB/s)(17.0MiB/1003msec) 00:13:12.203 slat (usec): min=5, 
max=13365, avg=120.47, stdev=796.29 00:13:12.203 clat (usec): min=1510, max=27555, avg=15151.57, stdev=3824.30 00:13:12.203 lat (usec): min=5501, max=29058, avg=15272.04, stdev=3862.18 00:13:12.203 clat percentiles (usec): 00:13:12.203 | 1.00th=[ 6456], 5.00th=[10814], 10.00th=[11207], 20.00th=[12649], 00:13:12.203 | 30.00th=[13304], 40.00th=[13829], 50.00th=[14222], 60.00th=[14615], 00:13:12.203 | 70.00th=[15664], 80.00th=[17433], 90.00th=[20841], 95.00th=[23725], 00:13:12.204 | 99.00th=[26084], 99.50th=[26608], 99.90th=[27395], 99.95th=[27395], 00:13:12.204 | 99.99th=[27657] 00:13:12.204 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:13:12.204 slat (usec): min=4, max=11097, avg=95.83, stdev=470.23 00:13:12.204 clat (usec): min=3616, max=27465, avg=13275.09, stdev=2762.20 00:13:12.204 lat (usec): min=3642, max=27482, avg=13370.92, stdev=2804.56 00:13:12.204 clat percentiles (usec): 00:13:12.204 | 1.00th=[ 5342], 5.00th=[ 6587], 10.00th=[ 8291], 20.00th=[12518], 00:13:12.204 | 30.00th=[13173], 40.00th=[13960], 50.00th=[14484], 60.00th=[14746], 00:13:12.204 | 70.00th=[14877], 80.00th=[15008], 90.00th=[15139], 95.00th=[15270], 00:13:12.204 | 99.00th=[15533], 99.50th=[15533], 99.90th=[26608], 99.95th=[26870], 00:13:12.204 | 99.99th=[27395] 00:13:12.204 bw ( KiB/s): min=17744, max=19158, per=32.33%, avg=18451.00, stdev=999.85, samples=2 00:13:12.204 iops : min= 4436, max= 4789, avg=4612.50, stdev=249.61, samples=2 00:13:12.204 lat (msec) : 2=0.01%, 4=0.08%, 10=8.21%, 20=85.61%, 50=6.10% 00:13:12.204 cpu : usr=3.69%, sys=11.88%, ctx=600, majf=0, minf=13 00:13:12.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:12.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:12.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:12.204 issued rwts: total=4348,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:12.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:12.204 00:13:12.204 Run status group 0 (all jobs): 00:13:12.204 READ: bw=52.4MiB/s (54.9MB/s), 8119KiB/s-19.1MiB/s (8314kB/s-20.0MB/s), io=52.8MiB (55.4MB), run=1003-1009msec 00:13:12.204 WRITE: bw=55.7MiB/s (58.4MB/s), 8357KiB/s-19.8MiB/s (8557kB/s-20.8MB/s), io=56.2MiB (59.0MB), run=1003-1009msec 00:13:12.204 00:13:12.204 Disk stats (read/write): 00:13:12.204 nvme0n1: ios=4145/4479, merge=0/0, ticks=51429/51595, in_queue=103024, util=88.45% 00:13:12.204 nvme0n2: ios=2077/2078, merge=0/0, ticks=49983/49462, in_queue=99445, util=88.30% 00:13:12.204 nvme0n3: ios=1536/2002, merge=0/0, ticks=44029/56200, in_queue=100229, util=89.23% 00:13:12.204 nvme0n4: ios=3584/4095, merge=0/0, ticks=50616/52511, in_queue=103127, util=89.69% 00:13:12.204 18:20:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:12.204 18:20:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=75177 00:13:12.204 18:20:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:12.204 18:20:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:12.204 [global] 00:13:12.204 thread=1 00:13:12.204 invalidate=1 00:13:12.204 rw=read 00:13:12.204 time_based=1 00:13:12.204 runtime=10 00:13:12.204 ioengine=libaio 00:13:12.204 direct=1 00:13:12.204 bs=4096 00:13:12.204 iodepth=1 00:13:12.204 norandommap=1 00:13:12.204 numjobs=1 00:13:12.204 00:13:12.204 [job0] 00:13:12.204 
filename=/dev/nvme0n1 00:13:12.204 [job1] 00:13:12.204 filename=/dev/nvme0n2 00:13:12.204 [job2] 00:13:12.204 filename=/dev/nvme0n3 00:13:12.204 [job3] 00:13:12.204 filename=/dev/nvme0n4 00:13:12.204 Could not set queue depth (nvme0n1) 00:13:12.204 Could not set queue depth (nvme0n2) 00:13:12.204 Could not set queue depth (nvme0n3) 00:13:12.204 Could not set queue depth (nvme0n4) 00:13:12.204 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:12.204 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:12.205 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:12.205 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:12.205 fio-3.35 00:13:12.205 Starting 4 threads 00:13:15.490 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:15.490 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=45248512, buflen=4096 00:13:15.490 fio: pid=75220, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:15.490 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:15.490 fio: pid=75219, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:15.490 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=52133888, buflen=4096 00:13:15.491 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:15.491 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:15.749 fio: pid=75217, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:15.749 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=55783424, buflen=4096 00:13:16.007 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:16.007 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:16.007 fio: pid=75218, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:16.007 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=61796352, buflen=4096 00:13:16.265 00:13:16.265 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=75217: Mon Jul 22 18:20:28 2024 00:13:16.266 read: IOPS=4011, BW=15.7MiB/s (16.4MB/s)(53.2MiB/3395msec) 00:13:16.266 slat (usec): min=11, max=17437, avg=21.46, stdev=199.14 00:13:16.266 clat (usec): min=137, max=4971, avg=226.11, stdev=84.83 00:13:16.266 lat (usec): min=191, max=17698, avg=247.58, stdev=216.88 00:13:16.266 clat percentiles (usec): 00:13:16.266 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 196], 00:13:16.266 | 30.00th=[ 200], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 212], 00:13:16.266 | 70.00th=[ 225], 80.00th=[ 249], 90.00th=[ 289], 95.00th=[ 322], 00:13:16.266 | 99.00th=[ 371], 99.50th=[ 424], 99.90th=[ 750], 99.95th=[ 2040], 00:13:16.266 | 99.99th=[ 3818] 00:13:16.266 bw ( KiB/s): min=14096, max=18384, per=28.91%, avg=15952.00, 
stdev=1816.36, samples=6 00:13:16.266 iops : min= 3524, max= 4596, avg=3988.00, stdev=454.09, samples=6 00:13:16.266 lat (usec) : 250=80.51%, 500=19.21%, 750=0.18%, 1000=0.04% 00:13:16.266 lat (msec) : 4=0.04%, 10=0.01% 00:13:16.266 cpu : usr=1.36%, sys=5.95%, ctx=13630, majf=0, minf=1 00:13:16.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:16.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.266 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.266 issued rwts: total=13620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:16.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:16.266 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=75218: Mon Jul 22 18:20:28 2024 00:13:16.266 read: IOPS=3965, BW=15.5MiB/s (16.2MB/s)(58.9MiB/3805msec) 00:13:16.266 slat (usec): min=12, max=12826, avg=18.87, stdev=153.95 00:13:16.266 clat (usec): min=172, max=166520, avg=231.74, stdev=1354.92 00:13:16.266 lat (usec): min=186, max=166537, avg=250.61, stdev=1363.74 00:13:16.266 clat percentiles (usec): 00:13:16.266 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 198], 00:13:16.266 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:13:16.266 | 70.00th=[ 221], 80.00th=[ 237], 90.00th=[ 273], 95.00th=[ 297], 00:13:16.266 | 99.00th=[ 326], 99.50th=[ 338], 99.90th=[ 873], 99.95th=[ 1254], 00:13:16.266 | 99.99th=[ 2147] 00:13:16.266 bw ( KiB/s): min=12090, max=18040, per=28.74%, avg=15857.43, stdev=2231.43, samples=7 00:13:16.266 iops : min= 3022, max= 4510, avg=3964.29, stdev=558.00, samples=7 00:13:16.266 lat (usec) : 250=84.25%, 500=15.56%, 750=0.05%, 1000=0.05% 00:13:16.266 lat (msec) : 2=0.06%, 4=0.01%, 250=0.01% 00:13:16.266 cpu : usr=1.26%, sys=5.23%, ctx=15094, majf=0, minf=1 00:13:16.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:16.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.266 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.266 issued rwts: total=15088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:16.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:16.266 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=75219: Mon Jul 22 18:20:28 2024 00:13:16.266 read: IOPS=4031, BW=15.7MiB/s (16.5MB/s)(49.7MiB/3157msec) 00:13:16.266 slat (usec): min=13, max=9300, avg=18.38, stdev=106.95 00:13:16.266 clat (usec): min=183, max=2432, avg=228.07, stdev=49.18 00:13:16.266 lat (usec): min=198, max=9556, avg=246.45, stdev=118.11 00:13:16.266 clat percentiles (usec): 00:13:16.266 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 204], 00:13:16.266 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:13:16.266 | 70.00th=[ 229], 80.00th=[ 247], 90.00th=[ 277], 95.00th=[ 297], 00:13:16.266 | 99.00th=[ 322], 99.50th=[ 330], 99.90th=[ 553], 99.95th=[ 914], 00:13:16.266 | 99.99th=[ 2212] 00:13:16.266 bw ( KiB/s): min=14008, max=17816, per=29.15%, avg=16084.00, stdev=1484.25, samples=6 00:13:16.266 iops : min= 3502, max= 4454, avg=4021.00, stdev=371.06, samples=6 00:13:16.266 lat (usec) : 250=80.97%, 500=18.91%, 750=0.05%, 1000=0.02% 00:13:16.266 lat (msec) : 2=0.02%, 4=0.02% 00:13:16.266 cpu : usr=1.24%, sys=5.51%, ctx=12733, majf=0, minf=1 00:13:16.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:16.266 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.266 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.266 issued rwts: total=12729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:16.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:16.266 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=75220: Mon Jul 22 18:20:28 2024 00:13:16.266 read: IOPS=3779, BW=14.8MiB/s (15.5MB/s)(43.2MiB/2923msec) 00:13:16.266 slat (usec): min=12, max=295, avg=15.37, stdev= 4.45 00:13:16.266 clat (usec): min=58, max=7405, avg=247.64, stdev=91.89 00:13:16.266 lat (usec): min=196, max=7419, avg=263.01, stdev=92.45 00:13:16.266 clat percentiles (usec): 00:13:16.266 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 212], 00:13:16.266 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 239], 00:13:16.266 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 310], 95.00th=[ 338], 00:13:16.266 | 99.00th=[ 375], 99.50th=[ 441], 99.90th=[ 635], 99.95th=[ 930], 00:13:16.266 | 99.99th=[ 3458] 00:13:16.266 bw ( KiB/s): min=13264, max=17120, per=27.82%, avg=15347.20, stdev=1856.66, samples=5 00:13:16.266 iops : min= 3316, max= 4280, avg=3836.80, stdev=464.17, samples=5 00:13:16.266 lat (usec) : 100=0.01%, 250=63.99%, 500=35.67%, 750=0.25%, 1000=0.03% 00:13:16.266 lat (msec) : 4=0.03%, 10=0.01% 00:13:16.266 cpu : usr=1.20%, sys=4.89%, ctx=11052, majf=0, minf=1 00:13:16.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:16.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.266 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.266 issued rwts: total=11048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:16.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:16.266 00:13:16.266 Run status group 0 (all jobs): 00:13:16.266 READ: bw=53.9MiB/s (56.5MB/s), 14.8MiB/s-15.7MiB/s (15.5MB/s-16.5MB/s), io=205MiB (215MB), run=2923-3805msec 00:13:16.266 00:13:16.266 Disk stats (read/write): 00:13:16.266 nvme0n1: ios=13539/0, merge=0/0, ticks=3124/0, in_queue=3124, util=95.28% 00:13:16.266 nvme0n2: ios=14241/0, merge=0/0, ticks=3399/0, in_queue=3399, util=95.88% 00:13:16.266 nvme0n3: ios=12595/0, merge=0/0, ticks=2926/0, in_queue=2926, util=96.31% 00:13:16.266 nvme0n4: ios=10921/0, merge=0/0, ticks=2723/0, in_queue=2723, util=96.57% 00:13:16.266 18:20:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:16.266 18:20:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:16.832 18:20:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:16.832 18:20:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:17.090 18:20:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:17.090 18:20:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:17.655 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:13:17.655 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:17.913 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:17.913 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:18.480 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:18.480 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 75177 00:13:18.480 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:18.480 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:18.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.480 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:18.480 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:13:18.480 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:18.480 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.480 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:18.480 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.480 nvmf hotplug test: fio failed as expected 00:13:18.480 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:13:18.480 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:18.480 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:18.480 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.738 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:18.738 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:18.738 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:18.738 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:18.738 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:18.738 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:18.738 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:13:18.738 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:18.738 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:13:18.738 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:18.738 18:20:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:18.738 rmmod nvme_tcp 00:13:18.738 rmmod nvme_fabrics 00:13:18.738 rmmod nvme_keyring 00:13:18.738 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:18.738 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:13:18.738 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:13:18.738 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 74674 ']' 00:13:18.739 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 74674 00:13:18.739 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 74674 ']' 00:13:18.739 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 74674 00:13:18.739 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:13:18.739 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:18.997 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74674 00:13:18.997 killing process with pid 74674 00:13:18.997 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:18.997 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:18.997 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74674' 00:13:18.997 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 74674 00:13:18.997 18:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 74674 00:13:20.371 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:20.371 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:20.371 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:20.371 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:20.371 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:20.371 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.371 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.371 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:20.371 00:13:20.371 real 0m22.404s 00:13:20.371 user 1m23.909s 00:13:20.371 sys 0m9.171s 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:20.371 ************************************ 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.371 END TEST nvmf_fio_target 00:13:20.371 ************************************ 00:13:20.371 18:20:32 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:20.371 ************************************ 00:13:20.371 START TEST nvmf_bdevio 00:13:20.371 ************************************ 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:20.371 * Looking for test storage... 00:13:20.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.371 18:20:32 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.371 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:20.372 Cannot find device "nvmf_tgt_br" 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:20.372 Cannot find device "nvmf_tgt_br2" 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:20.372 Cannot find device "nvmf_tgt_br" 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:20.372 Cannot find device "nvmf_tgt_br2" 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:20.372 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:20.372 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:20.372 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:20.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:20.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:13:20.631 00:13:20.631 --- 10.0.0.2 ping statistics --- 00:13:20.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.631 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:20.631 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:20.631 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:13:20.631 00:13:20.631 --- 10.0.0.3 ping statistics --- 00:13:20.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.631 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:20.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:20.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:13:20.631 00:13:20.631 --- 10.0.0.1 ping statistics --- 00:13:20.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.631 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=75565 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 75565 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 75565 ']' 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:20.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:20.631 18:20:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:20.889 [2024-07-22 18:20:32.680099] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:13:20.889 [2024-07-22 18:20:32.680248] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.889 [2024-07-22 18:20:32.854167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:21.148 [2024-07-22 18:20:33.148011] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:21.148 [2024-07-22 18:20:33.148103] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:21.148 [2024-07-22 18:20:33.148125] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:21.148 [2024-07-22 18:20:33.148145] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:21.148 [2024-07-22 18:20:33.148160] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:21.148 [2024-07-22 18:20:33.148419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:21.148 [2024-07-22 18:20:33.148567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:21.148 [2024-07-22 18:20:33.148671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:21.148 [2024-07-22 18:20:33.148685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:21.732 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:21.732 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:13:21.732 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:21.732 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:21.733 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:21.733 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.733 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:21.733 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.733 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:21.733 [2024-07-22 18:20:33.713860] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:21.733 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.733 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:21.733 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.733 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:21.992 Malloc0 00:13:21.992 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.992 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:21.992 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 
00:13:21.992 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:21.992 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.992 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:21.992 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.992 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:21.992 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.992 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.992 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.992 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:21.992 [2024-07-22 18:20:33.832242] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.992 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.992 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:21.992 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:21.992 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:13:21.992 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:13:21.992 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:21.992 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:21.992 { 00:13:21.992 "params": { 00:13:21.992 "name": "Nvme$subsystem", 00:13:21.992 "trtype": "$TEST_TRANSPORT", 00:13:21.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:21.992 "adrfam": "ipv4", 00:13:21.992 "trsvcid": "$NVMF_PORT", 00:13:21.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:21.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:21.992 "hdgst": ${hdgst:-false}, 00:13:21.992 "ddgst": ${ddgst:-false} 00:13:21.992 }, 00:13:21.992 "method": "bdev_nvme_attach_controller" 00:13:21.992 } 00:13:21.992 EOF 00:13:21.992 )") 00:13:21.992 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:13:21.992 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:13:21.992 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:13:21.992 18:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:21.992 "params": { 00:13:21.992 "name": "Nvme1", 00:13:21.992 "trtype": "tcp", 00:13:21.992 "traddr": "10.0.0.2", 00:13:21.992 "adrfam": "ipv4", 00:13:21.992 "trsvcid": "4420", 00:13:21.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:21.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:21.992 "hdgst": false, 00:13:21.992 "ddgst": false 00:13:21.992 }, 00:13:21.992 "method": "bdev_nvme_attach_controller" 00:13:21.992 }' 00:13:21.992 [2024-07-22 18:20:33.954366] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:13:21.992 [2024-07-22 18:20:33.954556] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75620 ] 00:13:22.251 [2024-07-22 18:20:34.138329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:22.509 [2024-07-22 18:20:34.421671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.509 [2024-07-22 18:20:34.421799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.509 [2024-07-22 18:20:34.421811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.074 I/O targets: 00:13:23.074 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:23.074 00:13:23.074 00:13:23.074 CUnit - A unit testing framework for C - Version 2.1-3 00:13:23.074 http://cunit.sourceforge.net/ 00:13:23.074 00:13:23.074 00:13:23.074 Suite: bdevio tests on: Nvme1n1 00:13:23.074 Test: blockdev write read block ...passed 00:13:23.074 Test: blockdev write zeroes read block ...passed 00:13:23.074 Test: blockdev write zeroes read no split ...passed 00:13:23.074 Test: blockdev write zeroes read split ...passed 00:13:23.074 Test: blockdev write zeroes read split partial ...passed 00:13:23.074 Test: blockdev reset ...[2024-07-22 18:20:35.005534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:23.074 [2024-07-22 18:20:35.005739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:13:23.074 [2024-07-22 18:20:35.025997] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:23.074 passed 00:13:23.074 Test: blockdev write read 8 blocks ...passed 00:13:23.074 Test: blockdev write read size > 128k ...passed 00:13:23.074 Test: blockdev write read invalid size ...passed 00:13:23.074 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:23.074 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:23.074 Test: blockdev write read max offset ...passed 00:13:23.333 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:23.333 Test: blockdev writev readv 8 blocks ...passed 00:13:23.333 Test: blockdev writev readv 30 x 1block ...passed 00:13:23.333 Test: blockdev writev readv block ...passed 00:13:23.333 Test: blockdev writev readv size > 128k ...passed 00:13:23.333 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:23.333 Test: blockdev comparev and writev ...[2024-07-22 18:20:35.208441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:23.333 [2024-07-22 18:20:35.208544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:23.333 [2024-07-22 18:20:35.208576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:23.333 [2024-07-22 18:20:35.208595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:23.333 [2024-07-22 18:20:35.209038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:23.333 [2024-07-22 18:20:35.209074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:23.333 [2024-07-22 18:20:35.209102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:23.333 [2024-07-22 18:20:35.209119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:23.333 [2024-07-22 18:20:35.209577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:23.333 [2024-07-22 18:20:35.209616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:23.333 [2024-07-22 18:20:35.209644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:23.333 [2024-07-22 18:20:35.209660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:23.333 [2024-07-22 18:20:35.210116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:23.333 [2024-07-22 18:20:35.210157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:23.333 [2024-07-22 18:20:35.210183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:23.333 [2024-07-22 18:20:35.210200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:23.333 passed 00:13:23.333 Test: blockdev nvme passthru rw ...passed 00:13:23.333 Test: blockdev nvme passthru vendor specific ...[2024-07-22 18:20:35.293664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:23.333 [2024-07-22 18:20:35.294064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:23.333 [2024-07-22 18:20:35.294419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:23.333 [2024-07-22 18:20:35.294478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:23.333 [2024-07-22 18:20:35.294819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:23.333 [2024-07-22 18:20:35.294869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:23.333 [2024-07-22 18:20:35.295193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:23.333 [2024-07-22 18:20:35.295231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:23.333 passed 00:13:23.333 Test: blockdev nvme admin passthru ...passed 00:13:23.333 Test: blockdev copy ...passed 00:13:23.333 00:13:23.333 Run Summary: Type Total Ran Passed Failed Inactive 00:13:23.333 suites 1 1 n/a 0 0 00:13:23.333 tests 23 23 23 0 0 00:13:23.333 asserts 152 152 152 0 n/a 00:13:23.333 00:13:23.333 Elapsed time = 1.075 seconds 00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:24.708 rmmod nvme_tcp 00:13:24.708 rmmod nvme_fabrics 00:13:24.708 rmmod nvme_keyring 00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
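(For readers following the trace: the bdevio pass above boils down to a short RPC sequence plus one bdevio invocation. The sketch below is assembled from the commands visible in the trace; the bdev name, NQN, serial, listen address and port are simply the values this run used, and the process-substitution detail is an assumption about how /dev/fd/62 was produced, so treat it as illustrative rather than a verbatim excerpt.)

# Target-side setup driven by target/bdevio.sh through scripts/rpc.py
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# bdevio then attaches as an NVMe/TCP initiator; the JSON printed above (bdev_nvme_attach_controller
# against 10.0.0.2:4420) appears to be fed in via bash process substitution, hence /dev/fd/62.
# gen_nvmf_target_json is the helper from test/nvmf/common.sh shown expanding in the trace.
test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)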
00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 75565 ']' 00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 75565 00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 75565 ']' 00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 75565 00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75565 00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:13:24.708 killing process with pid 75565 00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75565' 00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 75565 00:13:24.708 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 75565 00:13:26.087 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:26.087 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:26.087 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:26.088 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:26.088 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:26.088 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.088 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.088 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.088 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:26.088 00:13:26.088 real 0m5.996s 00:13:26.088 user 0m23.627s 00:13:26.088 sys 0m1.109s 00:13:26.088 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:26.088 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:26.088 ************************************ 00:13:26.088 END TEST nvmf_bdevio 00:13:26.088 ************************************ 00:13:26.346 18:20:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:13:26.346 18:20:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:26.346 00:13:26.346 real 4m5.050s 00:13:26.346 user 12m46.076s 00:13:26.346 sys 1m5.665s 00:13:26.346 18:20:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:26.346 18:20:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:26.346 ************************************ 00:13:26.346 END TEST nvmf_target_core 00:13:26.346 ************************************ 
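The remaining stages are driven through the same run_test wrapper, so the nvmf_target_extra pass that starts next can be reproduced outside Jenkins by calling the scripts directly. A minimal sketch, assuming root and a built SPDK tree at the path this CI checkout uses:

  cd /home/vagrant/spdk_repo/spdk
  ./test/nvmf/nvmf_target_extra.sh --transport=tcp      # the whole "extra" pass that follows
  ./test/nvmf/target/nvmf_example.sh --transport=tcp    # or only the example-app sub-test traced below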
00:13:26.346 18:20:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:26.346 18:20:38 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:26.346 18:20:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:26.346 18:20:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:26.346 18:20:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:26.346 ************************************ 00:13:26.346 START TEST nvmf_target_extra 00:13:26.346 ************************************ 00:13:26.346 18:20:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:26.346 * Looking for test storage... 00:13:26.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:26.347 18:20:38 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:26.347 ************************************ 00:13:26.347 START TEST nvmf_example 00:13:26.347 ************************************ 00:13:26.347 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:26.607 * Looking for test storage... 00:13:26.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:26.607 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:26.607 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:13:26.607 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.607 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.607 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.607 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.607 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.607 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.607 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.607 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.607 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.607 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.607 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:13:26.607 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:13:26.607 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.607 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.607 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:26.607 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.607 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:26.607 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.607 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.607 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.607 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.607 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:26.608 18:20:38 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 
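The NVMF_* variables assigned here and just below describe the virtual test topology, and the nvmf_veth_init commands traced in the following lines build it. Condensed, the setup amounts to roughly the sketch below (same iproute2/iptables calls as the trace, assuming root and this run's NET_TYPE=virt configuration; the link-up steps and the second target interface on 10.0.0.3 are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk                               # the target runs in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
  ip link add nvmf_br type bridge                             # bridge tying the host-side veth ends together
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic to the listener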
00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:26.608 Cannot find device "nvmf_tgt_br" 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # true 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:26.608 Cannot find device "nvmf_tgt_br2" 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # true 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:26.608 Cannot find device "nvmf_tgt_br" 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # true 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:26.608 Cannot find device "nvmf_tgt_br2" 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # true 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:26.608 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:26.608 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:26.608 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:26.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:26.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:13:26.867 00:13:26.867 --- 10.0.0.2 ping statistics --- 00:13:26.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.867 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:26.867 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:26.867 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:13:26.867 00:13:26.867 --- 10.0.0.3 ping statistics --- 00:13:26.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.867 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:26.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:26.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:13:26.867 00:13:26.867 --- 10.0.0.1 ping statistics --- 00:13:26.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.867 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=75901 00:13:26.867 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:26.868 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:26.868 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 75901 00:13:26.868 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 75901 ']' 00:13:26.868 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.868 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:26.868 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:13:26.868 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.868 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:26.868 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:28.244 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:28.244 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:13:28.244 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:13:28.244 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:28.244 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:28.244 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:28.244 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.244 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:28.244 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.244 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:13:28.244 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.244 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:28.244 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.244 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:13:28.244 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:28.244 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.244 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:28.244 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.244 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:13:28.244 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:28.244 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.245 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:28.245 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.245 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:28.245 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.245 18:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:28.245 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.245 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:13:28.245 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:40.446 Initializing NVMe Controllers 00:13:40.446 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:40.446 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:40.446 Initialization complete. Launching workers. 00:13:40.446 ======================================================== 00:13:40.446 Latency(us) 00:13:40.446 Device Information : IOPS MiB/s Average min max 00:13:40.446 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12011.84 46.92 5327.50 1094.75 22989.92 00:13:40.446 ======================================================== 00:13:40.446 Total : 12011.84 46.92 5327.50 1094.75 22989.92 00:13:40.446 00:13:40.446 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:13:40.446 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:13:40.446 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:40.446 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:13:40.446 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:40.446 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:13:40.446 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:40.446 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:40.446 rmmod nvme_tcp 00:13:40.446 rmmod nvme_fabrics 00:13:40.446 rmmod nvme_keyring 00:13:40.446 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:40.446 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:13:40.446 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:13:40.447 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 75901 ']' 00:13:40.447 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 75901 00:13:40.447 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 75901 ']' 00:13:40.447 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 75901 00:13:40.447 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:13:40.447 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:40.447 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75901 00:13:40.447 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:13:40.447 killing 
process with pid 75901 00:13:40.447 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:13:40.447 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75901' 00:13:40.447 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@967 -- # kill 75901 00:13:40.447 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # wait 75901 00:13:40.447 nvmf threads initialize successfully 00:13:40.447 bdev subsystem init successfully 00:13:40.447 created a nvmf target service 00:13:40.447 create targets's poll groups done 00:13:40.447 all subsystems of target started 00:13:40.447 nvmf target is running 00:13:40.447 all subsystems of target stopped 00:13:40.447 destroy targets's poll groups done 00:13:40.447 destroyed the nvmf target service 00:13:40.447 bdev subsystem finish successfully 00:13:40.447 nvmf threads destroy successfully 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:40.447 00:13:40.447 real 0m13.516s 00:13:40.447 user 0m47.755s 00:13:40.447 sys 0m2.057s 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:40.447 ************************************ 00:13:40.447 END TEST nvmf_example 00:13:40.447 ************************************ 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:40.447 ************************************ 00:13:40.447 START TEST nvmf_filesystem 00:13:40.447 
************************************ 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:40.447 * Looking for test storage... 00:13:40.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # 
CONFIG_PGO_CAPTURE=n 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:13:40.447 18:20:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:13:40.447 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:13:40.448 18:20:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:40.448 #define SPDK_CONFIG_H 00:13:40.448 #define SPDK_CONFIG_APPS 1 00:13:40.448 #define SPDK_CONFIG_ARCH native 00:13:40.448 #define SPDK_CONFIG_ASAN 1 00:13:40.448 #define SPDK_CONFIG_AVAHI 1 00:13:40.448 #undef SPDK_CONFIG_CET 00:13:40.448 #define SPDK_CONFIG_COVERAGE 1 00:13:40.448 #define SPDK_CONFIG_CROSS_PREFIX 00:13:40.448 #undef SPDK_CONFIG_CRYPTO 00:13:40.448 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:40.448 #undef 
SPDK_CONFIG_CUSTOMOCF 00:13:40.448 #undef SPDK_CONFIG_DAOS 00:13:40.448 #define SPDK_CONFIG_DAOS_DIR 00:13:40.448 #define SPDK_CONFIG_DEBUG 1 00:13:40.448 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:40.448 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:13:40.448 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:40.448 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:40.448 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:40.448 #undef SPDK_CONFIG_DPDK_UADK 00:13:40.448 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:40.448 #define SPDK_CONFIG_EXAMPLES 1 00:13:40.448 #undef SPDK_CONFIG_FC 00:13:40.448 #define SPDK_CONFIG_FC_PATH 00:13:40.448 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:40.448 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:40.448 #undef SPDK_CONFIG_FUSE 00:13:40.448 #undef SPDK_CONFIG_FUZZER 00:13:40.448 #define SPDK_CONFIG_FUZZER_LIB 00:13:40.448 #define SPDK_CONFIG_GOLANG 1 00:13:40.448 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:40.448 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:40.448 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:40.448 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:40.448 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:40.448 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:40.448 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:40.448 #define SPDK_CONFIG_IDXD 1 00:13:40.448 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:40.448 #undef SPDK_CONFIG_IPSEC_MB 00:13:40.448 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:40.448 #define SPDK_CONFIG_ISAL 1 00:13:40.448 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:40.448 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:40.448 #define SPDK_CONFIG_LIBDIR 00:13:40.448 #undef SPDK_CONFIG_LTO 00:13:40.448 #define SPDK_CONFIG_MAX_LCORES 128 00:13:40.448 #define SPDK_CONFIG_NVME_CUSE 1 00:13:40.448 #undef SPDK_CONFIG_OCF 00:13:40.448 #define SPDK_CONFIG_OCF_PATH 00:13:40.448 #define SPDK_CONFIG_OPENSSL_PATH 00:13:40.448 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:40.448 #define SPDK_CONFIG_PGO_DIR 00:13:40.448 #undef SPDK_CONFIG_PGO_USE 00:13:40.448 #define SPDK_CONFIG_PREFIX /usr/local 00:13:40.448 #undef SPDK_CONFIG_RAID5F 00:13:40.448 #undef SPDK_CONFIG_RBD 00:13:40.448 #define SPDK_CONFIG_RDMA 1 00:13:40.448 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:40.448 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:40.448 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:40.448 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:40.448 #define SPDK_CONFIG_SHARED 1 00:13:40.448 #undef SPDK_CONFIG_SMA 00:13:40.448 #define SPDK_CONFIG_TESTS 1 00:13:40.448 #undef SPDK_CONFIG_TSAN 00:13:40.448 #define SPDK_CONFIG_UBLK 1 00:13:40.448 #define SPDK_CONFIG_UBSAN 1 00:13:40.448 #undef SPDK_CONFIG_UNIT_TESTS 00:13:40.448 #undef SPDK_CONFIG_URING 00:13:40.448 #define SPDK_CONFIG_URING_PATH 00:13:40.448 #undef SPDK_CONFIG_URING_ZNS 00:13:40.448 #define SPDK_CONFIG_USDT 1 00:13:40.448 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:40.448 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:40.448 #define SPDK_CONFIG_VFIO_USER 1 00:13:40.448 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:40.448 #define SPDK_CONFIG_VHOST 1 00:13:40.448 #define SPDK_CONFIG_VIRTIO 1 00:13:40.448 #undef SPDK_CONFIG_VTUNE 00:13:40.448 #define SPDK_CONFIG_VTUNE_DIR 00:13:40.448 #define SPDK_CONFIG_WERROR 1 00:13:40.448 #define SPDK_CONFIG_WPDK_DIR 00:13:40.448 #undef SPDK_CONFIG_XNVME 00:13:40.448 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:40.448 18:20:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.448 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.449 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.449 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.449 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.449 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:40.449 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.449 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:40.449 18:20:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:40.449 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:40.449 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:40.449 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:13:40.449 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:13:40.449 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:13:40.449 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:13:40.449 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:13:40.449 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export 
SPDK_TEST_NVME_CUSE 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:13:40.449 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 
00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 1 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:40.450 18:20:52 
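Annotation: each ": 0" (or ": 1") followed by "export SPDK_TEST_..." in this run is consistent with the common bash idiom of assigning a default only when the variable was not already set by the caller (here, autorun-spdk.conf), then exporting it for child scripts. A small sketch of that idiom with a hypothetical flag name and an illustrative default, not the actual autotest_common.sh source:

# Sketch of the default-and-export idiom suggested by the trace; SPDK_TEST_EXAMPLE
# and its default value are placeholders.
: "${SPDK_TEST_EXAMPLE:=0}"   # keep the caller's value if set, otherwise default to 0
export SPDK_TEST_EXAMPLE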
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 1 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:40.450 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 
-- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 76156 ]] 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 76156 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@334 -- # local source fs size avail mount use 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.pu0MXZ 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.pu0MXZ/tests/target /tmp/spdk.pu0MXZ 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6256631808 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267887616 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=11255808 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2487009280 
00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=20148224 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13763997696 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5266640896 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13763997696 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5266640896 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:40.451 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267752448 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267891712 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=139264 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:40.452 
18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=93586743296 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6116036608 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:13:40.452 * Looking for test storage... 
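Annotation: set_test_storage, traced above, enumerates "df -T" output into per-mount size/avail/use arrays and then, for each candidate directory, checks that the filesystem backing it has at least the requested free space (roughly 2 GiB here) before exporting it as SPDK_TEST_STORAGE. A simplified, self-contained sketch of that selection logic; pick_test_storage and its arguments are inventions for this illustration, not the function used by the harness:

# Sketch: pick the first candidate directory whose backing filesystem has enough free space.
pick_test_storage() {
    local requested=$1; shift              # required bytes, e.g. 2214592512
    local dir avail
    for dir in "$@"; do                    # candidate directories, in priority order
        avail=$(df -B1 --output=avail "$dir" 2>/dev/null | tail -n1)
        if [[ -n "$avail" && "$avail" -ge "$requested" ]]; then
            printf '* Found test storage at %s\n' "$dir" >&2
            echo "$dir"
            return 0
        fi
    done
    return 1
}

Used as, for example: storage=$(pick_test_storage 2214592512 "$testdir" /tmp) || exit 1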
00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=13763997696 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:40.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 
-- # xtrace_restore 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.452 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:40.453 Cannot find device "nvmf_tgt_br" 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:40.453 Cannot find device "nvmf_tgt_br2" 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:40.453 Cannot find device "nvmf_tgt_br" 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:40.453 Cannot find device "nvmf_tgt_br2" 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:40.453 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:40.453 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@178 -- # 
ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:40.453 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:40.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:40.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:13:40.711 00:13:40.711 --- 10.0.0.2 ping statistics --- 00:13:40.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.711 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:40.711 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:40.711 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:13:40.711 00:13:40.711 --- 10.0.0.3 ping statistics --- 00:13:40.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.711 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:40.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:40.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:13:40.711 00:13:40.711 --- 10.0.0.1 ping statistics --- 00:13:40.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.711 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:40.711 ************************************ 00:13:40.711 START TEST nvmf_filesystem_no_in_capsule 00:13:40.711 ************************************ 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=76315 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 76315 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 76315 ']' 00:13:40.711 18:20:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:40.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:40.711 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:40.711 [2024-07-22 18:20:52.618283] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:13:40.711 [2024-07-22 18:20:52.618447] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.969 [2024-07-22 18:20:52.792801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:41.226 [2024-07-22 18:20:53.107373] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.226 [2024-07-22 18:20:53.107480] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.226 [2024-07-22 18:20:53.107502] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:41.226 [2024-07-22 18:20:53.107520] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:41.226 [2024-07-22 18:20:53.107533] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
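With the RPC socket up, the target is provisioned through a short rpc_cmd sequence and the host then connects across the veth pair. The calls below are copied from the trace that follows (rpc_cmd is the harness wrapper around SPDK's JSON-RPC client); only the comments are added:

# Target side, executed against the nvmf_tgt running in nvmf_tgt_ns_spdk:
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0      # TCP transport; -c 0 disables in-capsule data
rpc_cmd bdev_malloc_create 512 512 -b Malloc1             # 512 MiB malloc bdev with 512-byte blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host, -s: serial
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1                      # attach the bdev as a namespace
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side, from the root namespace, reaching 10.0.0.2 through nvmf_br:
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da \
    --hostid=0b8484e2-e129-4a11-8748-0b3c728771da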
00:13:41.226 [2024-07-22 18:20:53.107815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.226 [2024-07-22 18:20:53.108203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.226 [2024-07-22 18:20:53.108338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.226 [2024-07-22 18:20:53.108343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.861 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:41.861 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:13:41.861 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:41.861 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:41.861 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:41.861 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.861 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:41.861 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:41.861 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.861 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:41.861 [2024-07-22 18:20:53.612867] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:41.861 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.861 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:41.861 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.861 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:42.426 Malloc1 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.426 18:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:42.426 [2024-07-22 18:20:54.259381] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:42.426 { 00:13:42.426 "aliases": [ 00:13:42.426 "a2072663-b07e-4eeb-8f0c-4e5cd0de084a" 00:13:42.426 ], 00:13:42.426 "assigned_rate_limits": { 00:13:42.426 "r_mbytes_per_sec": 0, 00:13:42.426 "rw_ios_per_sec": 0, 00:13:42.426 "rw_mbytes_per_sec": 0, 00:13:42.426 "w_mbytes_per_sec": 0 00:13:42.426 }, 00:13:42.426 "block_size": 512, 00:13:42.426 "claim_type": "exclusive_write", 00:13:42.426 "claimed": true, 00:13:42.426 "driver_specific": {}, 00:13:42.426 "memory_domains": [ 00:13:42.426 { 00:13:42.426 "dma_device_id": "system", 00:13:42.426 "dma_device_type": 1 00:13:42.426 }, 00:13:42.426 { 00:13:42.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.426 
"dma_device_type": 2 00:13:42.426 } 00:13:42.426 ], 00:13:42.426 "name": "Malloc1", 00:13:42.426 "num_blocks": 1048576, 00:13:42.426 "product_name": "Malloc disk", 00:13:42.426 "supported_io_types": { 00:13:42.426 "abort": true, 00:13:42.426 "compare": false, 00:13:42.426 "compare_and_write": false, 00:13:42.426 "copy": true, 00:13:42.426 "flush": true, 00:13:42.426 "get_zone_info": false, 00:13:42.426 "nvme_admin": false, 00:13:42.426 "nvme_io": false, 00:13:42.426 "nvme_io_md": false, 00:13:42.426 "nvme_iov_md": false, 00:13:42.426 "read": true, 00:13:42.426 "reset": true, 00:13:42.426 "seek_data": false, 00:13:42.426 "seek_hole": false, 00:13:42.426 "unmap": true, 00:13:42.426 "write": true, 00:13:42.426 "write_zeroes": true, 00:13:42.426 "zcopy": true, 00:13:42.426 "zone_append": false, 00:13:42.426 "zone_management": false 00:13:42.426 }, 00:13:42.426 "uuid": "a2072663-b07e-4eeb-8f0c-4e5cd0de084a", 00:13:42.426 "zoned": false 00:13:42.426 } 00:13:42.426 ]' 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:42.426 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:42.683 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:42.684 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:42.684 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:42.684 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:42.684 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:44.586 18:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:44.586 18:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:44.586 18:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:13:44.586 18:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:44.586 18:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.586 18:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:44.586 18:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:44.586 18:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:44.845 18:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:44.845 18:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:44.845 18:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:44.845 18:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:44.845 18:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:44.845 18:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:44.845 18:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:44.845 18:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:44.845 18:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:44.845 18:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:44.845 18:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:45.782 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:45.782 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:45.782 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:45.782 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:45.782 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:45.782 ************************************ 00:13:45.782 START TEST filesystem_ext4 00:13:45.782 ************************************ 00:13:45.782 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 
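Each filesystem_* subtest that follows runs the same check against the partition just created with parted: build a filesystem on /dev/nvme0n1p1, mount it, perform a small write, unmount, and confirm the target survived. A condensed paraphrase of that flow from target/filesystem.sh (not verbatim; ext4 shown, btrfs and xfs differ only in the mkfs force flag):

# Condensed paraphrase of nvmf_filesystem_create (target/filesystem.sh).
fstype=$1                                 # ext4 | btrfs | xfs
nvme_name=$2                              # nvme0n1, resolved earlier from the SPDKISFASTANDAWESOME serial
dev=/dev/${nvme_name}p1

force=-f
[ "$fstype" = ext4 ] && force=-F          # mkfs.ext4 takes -F, btrfs/xfs take -f
mkfs."$fstype" "$force" "$dev"            # e.g. mkfs.ext4 -F /dev/nvme0n1p1

mount "$dev" /mnt/device                  # all I/O goes over NVMe/TCP to the Malloc1 namespace
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device

kill -0 "$nvmfpid"                                 # target (pid 76315 in this run) must still be alive
lsblk -l -o NAME | grep -q -w "${nvme_name}p1"     # and the partition must still be visible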
00:13:45.782 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:45.782 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:45.782 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:45.782 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:13:45.782 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:13:45.782 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:13:45.782 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:13:45.782 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:13:45.782 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:13:45.782 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:45.782 mke2fs 1.46.5 (30-Dec-2021) 00:13:46.040 Discarding device blocks: 0/522240 done 00:13:46.040 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:46.040 Filesystem UUID: 4c902b02-0ff2-480d-baa6-f0061a62dce3 00:13:46.040 Superblock backups stored on blocks: 00:13:46.040 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:46.040 00:13:46.040 Allocating group tables: 0/64 done 00:13:46.040 Writing inode tables: 0/64 done 00:13:46.040 Creating journal (8192 blocks): done 00:13:46.040 Writing superblocks and filesystem accounting information: 0/64 done 00:13:46.041 00:13:46.041 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:13:46.041 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:46.041 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:46.299 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:46.299 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:46.299 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:46.299 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:46.299 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:46.299 
18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 76315 00:13:46.299 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:46.299 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:46.299 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:46.299 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:46.299 ************************************ 00:13:46.299 END TEST filesystem_ext4 00:13:46.299 ************************************ 00:13:46.299 00:13:46.299 real 0m0.400s 00:13:46.299 user 0m0.027s 00:13:46.299 sys 0m0.060s 00:13:46.299 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:46.299 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:46.299 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:13:46.299 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:46.299 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:46.299 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:46.299 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:46.299 ************************************ 00:13:46.299 START TEST filesystem_btrfs 00:13:46.299 ************************************ 00:13:46.299 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:46.299 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:46.299 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:46.299 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:46.299 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:13:46.299 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:13:46.299 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:13:46.299 18:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:13:46.299 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:13:46.300 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:13:46.300 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:46.558 btrfs-progs v6.6.2 00:13:46.558 See https://btrfs.readthedocs.io for more information. 00:13:46.558 00:13:46.558 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:46.558 NOTE: several default settings have changed in version 5.15, please make sure 00:13:46.558 this does not affect your deployments: 00:13:46.558 - DUP for metadata (-m dup) 00:13:46.558 - enabled no-holes (-O no-holes) 00:13:46.558 - enabled free-space-tree (-R free-space-tree) 00:13:46.558 00:13:46.558 Label: (null) 00:13:46.558 UUID: 772a7bb0-4387-4d89-8361-710e4c55131e 00:13:46.558 Node size: 16384 00:13:46.558 Sector size: 4096 00:13:46.558 Filesystem size: 510.00MiB 00:13:46.558 Block group profiles: 00:13:46.558 Data: single 8.00MiB 00:13:46.558 Metadata: DUP 32.00MiB 00:13:46.558 System: DUP 8.00MiB 00:13:46.558 SSD detected: yes 00:13:46.558 Zoned device: no 00:13:46.558 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:46.558 Runtime features: free-space-tree 00:13:46.558 Checksum: crc32c 00:13:46.558 Number of devices: 1 00:13:46.558 Devices: 00:13:46.558 ID SIZE PATH 00:13:46.558 1 510.00MiB /dev/nvme0n1p1 00:13:46.558 00:13:46.558 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:13:46.558 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:46.558 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:46.558 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:46.558 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:46.558 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:46.558 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:46.558 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:46.558 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 76315 00:13:46.558 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:46.558 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # 
grep -q -w nvme0n1 00:13:46.558 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:46.558 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:46.558 ************************************ 00:13:46.558 END TEST filesystem_btrfs 00:13:46.558 ************************************ 00:13:46.558 00:13:46.558 real 0m0.317s 00:13:46.558 user 0m0.018s 00:13:46.558 sys 0m0.072s 00:13:46.558 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:46.558 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:46.816 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:13:46.816 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:46.816 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:46.816 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:46.816 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:46.816 ************************************ 00:13:46.816 START TEST filesystem_xfs 00:13:46.816 ************************************ 00:13:46.816 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:13:46.816 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:46.816 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:46.816 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:46.816 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:13:46.816 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:13:46.816 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:13:46.816 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:13:46.816 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:13:46.816 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:13:46.816 18:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:46.816 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:46.816 = sectsz=512 attr=2, projid32bit=1 00:13:46.816 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:46.816 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:46.816 data = bsize=4096 blocks=130560, imaxpct=25 00:13:46.816 = sunit=0 swidth=0 blks 00:13:46.816 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:46.816 log =internal log bsize=4096 blocks=16384, version=2 00:13:46.816 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:46.816 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:47.751 Discarding blocks...Done. 00:13:47.751 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:13:47.751 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:50.283 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:50.283 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:50.283 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:50.283 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:50.283 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:50.283 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:50.283 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 76315 00:13:50.283 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:50.283 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:50.283 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:50.283 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:50.283 ************************************ 00:13:50.283 END TEST filesystem_xfs 00:13:50.283 ************************************ 00:13:50.283 00:13:50.283 real 0m3.272s 00:13:50.283 user 0m0.023s 00:13:50.283 sys 0m0.052s 00:13:50.283 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:50.284 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:50.284 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:13:50.284 18:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:50.284 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:50.284 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:50.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.284 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:50.284 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:13:50.284 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:50.284 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:50.284 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:50.284 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:50.284 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:50.284 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.284 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.284 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:50.284 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.284 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:50.284 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 76315 00:13:50.284 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 76315 ']' 00:13:50.284 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 76315 00:13:50.284 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:13:50.284 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:50.284 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76315 00:13:50.284 killing process with pid 76315 00:13:50.284 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:50.284 18:21:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:50.284 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76315' 00:13:50.284 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 76315 00:13:50.284 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 76315 00:13:52.872 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:52.872 00:13:52.872 real 0m12.147s 00:13:52.872 user 0m43.958s 00:13:52.872 sys 0m1.807s 00:13:52.872 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:52.872 ************************************ 00:13:52.873 END TEST nvmf_filesystem_no_in_capsule 00:13:52.873 ************************************ 00:13:52.873 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:52.873 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:13:52.873 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:52.873 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:52.873 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:52.873 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:52.873 ************************************ 00:13:52.873 START TEST nvmf_filesystem_in_capsule 00:13:52.873 ************************************ 00:13:52.873 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:13:52.873 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:52.873 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:52.873 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:52.873 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:52.873 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:52.873 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=76655 00:13:52.873 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 76655 00:13:52.873 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 76655 ']' 00:13:52.873 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
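The teardown just traced undoes the setup in reverse order: drop the test partition, disconnect the host, delete the subsystem over RPC, then signal the target and wait for it so the next variant starts clean. Condensed from the commands above (paraphrase, with the pid and NQN of this run):

# Teardown order for the no_in_capsule run (paraphrase of target/filesystem.sh + killprocess).
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1            # remove the SPDK_TEST partition under a lock
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1             # host side: detach the NVMe/TCP controller
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # target side: tear down the subsystem
kill 76315                                                # killprocess: terminate the nvmf_tgt reactor
wait 76315                                                # reap it before nvmf_filesystem_in_capsule begins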
00:13:52.873 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.873 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:52.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.873 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.873 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:52.873 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:52.873 [2024-07-22 18:21:04.863432] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:13:52.873 [2024-07-22 18:21:04.864419] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.132 [2024-07-22 18:21:05.048068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:53.390 [2024-07-22 18:21:05.368174] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.390 [2024-07-22 18:21:05.368506] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.390 [2024-07-22 18:21:05.368598] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.390 [2024-07-22 18:21:05.368686] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.390 [2024-07-22 18:21:05.368760] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
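Functionally, this second variant differs from the first only in the value handed through nvmf_filesystem_part 4096 (in_capsule=4096 above): the transport is created with -c 4096 instead of -c 0, so small write payloads can travel inside the NVMe/TCP command capsule instead of a separate data transfer. The two transport-create calls, copied from the two runs:

rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0      # nvmf_filesystem_no_in_capsule (earlier run)
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # nvmf_filesystem_in_capsule (this run, below)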
00:13:53.390 [2024-07-22 18:21:05.369108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.390 [2024-07-22 18:21:05.370044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:53.390 [2024-07-22 18:21:05.370123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:53.390 [2024-07-22 18:21:05.370124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.960 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:53.960 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:13:53.960 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:53.960 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:53.960 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:53.960 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.960 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:53.960 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:53.960 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.960 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:53.960 [2024-07-22 18:21:05.838699] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.960 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.960 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:53.960 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.960 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:54.526 Malloc1 00:13:54.526 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.526 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:54.526 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.526 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:54.526 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.526 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:54.526 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.526 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:54.526 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.526 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:54.526 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.526 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:54.526 [2024-07-22 18:21:06.500318] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.526 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.526 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:54.526 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:54.526 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:54.526 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:54.526 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:54.526 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:54.526 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.526 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:54.526 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.526 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:54.526 { 00:13:54.526 "aliases": [ 00:13:54.526 "45bfc515-d2fc-4a1f-94f4-a0858bd8fdcb" 00:13:54.526 ], 00:13:54.526 "assigned_rate_limits": { 00:13:54.526 "r_mbytes_per_sec": 0, 00:13:54.526 "rw_ios_per_sec": 0, 00:13:54.526 "rw_mbytes_per_sec": 0, 00:13:54.526 "w_mbytes_per_sec": 0 00:13:54.526 }, 00:13:54.526 "block_size": 512, 00:13:54.526 "claim_type": "exclusive_write", 00:13:54.526 "claimed": true, 00:13:54.526 "driver_specific": {}, 00:13:54.526 "memory_domains": [ 00:13:54.526 { 00:13:54.526 "dma_device_id": "system", 00:13:54.526 "dma_device_type": 1 00:13:54.526 }, 00:13:54.526 { 00:13:54.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.526 "dma_device_type": 2 00:13:54.526 } 00:13:54.526 ], 00:13:54.526 "name": "Malloc1", 00:13:54.526 "num_blocks": 1048576, 00:13:54.526 "product_name": 
"Malloc disk", 00:13:54.526 "supported_io_types": { 00:13:54.526 "abort": true, 00:13:54.526 "compare": false, 00:13:54.526 "compare_and_write": false, 00:13:54.526 "copy": true, 00:13:54.526 "flush": true, 00:13:54.526 "get_zone_info": false, 00:13:54.526 "nvme_admin": false, 00:13:54.526 "nvme_io": false, 00:13:54.526 "nvme_io_md": false, 00:13:54.526 "nvme_iov_md": false, 00:13:54.526 "read": true, 00:13:54.526 "reset": true, 00:13:54.526 "seek_data": false, 00:13:54.526 "seek_hole": false, 00:13:54.526 "unmap": true, 00:13:54.526 "write": true, 00:13:54.526 "write_zeroes": true, 00:13:54.526 "zcopy": true, 00:13:54.526 "zone_append": false, 00:13:54.526 "zone_management": false 00:13:54.526 }, 00:13:54.526 "uuid": "45bfc515-d2fc-4a1f-94f4-a0858bd8fdcb", 00:13:54.526 "zoned": false 00:13:54.526 } 00:13:54.526 ]' 00:13:54.526 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:54.784 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:54.784 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:54.784 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:54.784 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:54.784 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:54.784 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:54.784 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:54.784 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:54.784 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:54.784 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:54.784 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:54.784 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:57.313 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:57.313 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:57.313 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:57.313 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:57.313 18:21:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:57.313 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:57.313 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:57.313 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:57.313 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:57.313 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:57.313 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:57.313 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:57.313 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:57.313 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:57.313 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:57.313 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:57.313 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:57.313 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:57.313 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:58.248 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:58.248 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:58.248 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:58.248 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:58.248 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:58.248 ************************************ 00:13:58.248 START TEST filesystem_in_capsule_ext4 00:13:58.248 ************************************ 00:13:58.248 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:58.248 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:58.248 18:21:09 
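The target bring-up, host attach and size pre-checks interleaved in the xtrace above reduce to the following sequence (a condensed sketch of target/filesystem.sh as logged; rpc_cmd is assumed to be the autotest wrapper around SPDK's scripts/rpc.py, which this excerpt does not show):

# Transport with in-capsule data enabled (-c 4096), per the *_in_capsule variant of the test.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096

# 512 MiB malloc bdev (1048576 x 512-byte blocks, matching the bdev_get_bdevs JSON above),
# exported through one subsystem with one TCP listener.
rpc_cmd bdev_malloc_create 512 512 -b Malloc1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect, find the block device by its serial, and cross-check sizes.
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da \
             --hostid=0b8484e2-e129-4a11-8748-0b3c728771da \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
rpc_cmd bdev_get_bdevs -b Malloc1 | jq '.[] .block_size'                    # -> 512
rpc_cmd bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks'                    # -> 1048576
lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'     # -> nvme0n1
# sec_size_to_bytes nvme0n1 reports 536870912 via /sys/block/nvme0n1, equal to malloc_size,
# after which the namespace is partitioned for the per-filesystem subtests:
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe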
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:58.248 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:58.248 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:13:58.248 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:13:58.248 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:13:58.248 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:13:58.248 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:13:58.248 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:13:58.248 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:58.248 mke2fs 1.46.5 (30-Dec-2021) 00:13:58.248 Discarding device blocks: 0/522240 done 00:13:58.248 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:58.248 Filesystem UUID: c5601706-81aa-4fc1-8f1a-eb124393f9ad 00:13:58.248 Superblock backups stored on blocks: 00:13:58.248 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:58.248 00:13:58.248 Allocating group tables: 0/64 done 00:13:58.248 Writing inode tables: 0/64 done 00:13:58.248 Creating journal (8192 blocks): done 00:13:58.248 Writing superblocks and filesystem accounting information: 0/64 done 00:13:58.248 00:13:58.248 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:13:58.248 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:58.248 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:58.506 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:58.506 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:58.506 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:58.506 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:58.506 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:58.507 18:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 76655 00:13:58.507 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:58.507 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:58.507 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:58.507 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:58.507 ************************************ 00:13:58.507 END TEST filesystem_in_capsule_ext4 00:13:58.507 ************************************ 00:13:58.507 00:13:58.507 real 0m0.374s 00:13:58.507 user 0m0.023s 00:13:58.507 sys 0m0.064s 00:13:58.507 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:58.507 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:58.507 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:13:58.507 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:58.507 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:58.507 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:58.507 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:58.507 ************************************ 00:13:58.507 START TEST filesystem_in_capsule_btrfs 00:13:58.507 ************************************ 00:13:58.507 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:58.507 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:58.507 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:58.507 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:58.507 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:13:58.507 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:13:58.507 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@926 -- # local i=0 00:13:58.507 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:13:58.507 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:13:58.507 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:13:58.507 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:58.765 btrfs-progs v6.6.2 00:13:58.765 See https://btrfs.readthedocs.io for more information. 00:13:58.765 00:13:58.765 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:58.765 NOTE: several default settings have changed in version 5.15, please make sure 00:13:58.765 this does not affect your deployments: 00:13:58.765 - DUP for metadata (-m dup) 00:13:58.765 - enabled no-holes (-O no-holes) 00:13:58.765 - enabled free-space-tree (-R free-space-tree) 00:13:58.765 00:13:58.765 Label: (null) 00:13:58.765 UUID: ef670c9f-68f3-41e2-b749-b9a3696028dd 00:13:58.765 Node size: 16384 00:13:58.765 Sector size: 4096 00:13:58.765 Filesystem size: 510.00MiB 00:13:58.765 Block group profiles: 00:13:58.765 Data: single 8.00MiB 00:13:58.765 Metadata: DUP 32.00MiB 00:13:58.765 System: DUP 8.00MiB 00:13:58.765 SSD detected: yes 00:13:58.765 Zoned device: no 00:13:58.765 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:58.765 Runtime features: free-space-tree 00:13:58.765 Checksum: crc32c 00:13:58.765 Number of devices: 1 00:13:58.765 Devices: 00:13:58.765 ID SIZE PATH 00:13:58.765 1 510.00MiB /dev/nvme0n1p1 00:13:58.765 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 76655 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:58.765 ************************************ 00:13:58.765 END TEST filesystem_in_capsule_btrfs 00:13:58.765 ************************************ 00:13:58.765 00:13:58.765 real 0m0.248s 00:13:58.765 user 0m0.016s 00:13:58.765 sys 0m0.072s 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:58.765 ************************************ 00:13:58.765 START TEST filesystem_in_capsule_xfs 00:13:58.765 ************************************ 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:13:58.765 18:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:13:58.765 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:59.023 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:59.023 = sectsz=512 attr=2, projid32bit=1 00:13:59.023 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:59.023 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:59.023 data = bsize=4096 blocks=130560, imaxpct=25 00:13:59.023 = sunit=0 swidth=0 blks 00:13:59.023 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:59.023 log =internal log bsize=4096 blocks=16384, version=2 00:13:59.023 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:59.023 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:59.590 Discarding blocks...Done. 00:13:59.590 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:13:59.590 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:01.519 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:01.519 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:14:01.519 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:01.519 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:14:01.519 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:14:01.519 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:01.519 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 76655 00:14:01.519 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:01.519 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:01.519 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:01.519 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:01.519 ************************************ 00:14:01.519 END TEST filesystem_in_capsule_xfs 00:14:01.519 ************************************ 00:14:01.519 00:14:01.519 real 0m2.715s 00:14:01.519 user 0m0.017s 00:14:01.519 sys 0m0.065s 00:14:01.519 18:21:13 
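Each of the three subtests above (ext4, btrfs, xfs) runs the same create/use/verify cycle from target/filesystem.sh; only the filesystem type and the force flag differ (-F for ext4, -f for btrfs and xfs). Condensed from the logged commands, with 76655 being the nvmf_tgt pid of this run:

mkfs.$fstype $force /dev/nvme0n1p1
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa           # write something through the NVMe/TCP namespace
sync
rm /mnt/device/aaa
sync
umount /mnt/device

# The subtest passes only if the target survived the I/O and the device and its
# partition are still visible on the initiator.
kill -0 76655
lsblk -l -o NAME | grep -q -w nvme0n1
lsblk -l -o NAME | grep -q -w nvme0n1p1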
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:01.519 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:01.519 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:14:01.519 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:01.519 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:01.519 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:01.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.519 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:01.519 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:14:01.519 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:01.519 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:01.778 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:01.779 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:01.779 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:14:01.779 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:01.779 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.779 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:01.779 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.779 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:01.779 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 76655 00:14:01.779 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 76655 ']' 00:14:01.779 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 76655 00:14:01.779 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:14:01.779 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux 
']' 00:14:01.779 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76655 00:14:01.779 killing process with pid 76655 00:14:01.779 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:01.779 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:01.779 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76655' 00:14:01.779 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 76655 00:14:01.779 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 76655 00:14:04.310 ************************************ 00:14:04.310 END TEST nvmf_filesystem_in_capsule 00:14:04.310 ************************************ 00:14:04.310 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:04.310 00:14:04.310 real 0m11.450s 00:14:04.310 user 0m41.292s 00:14:04.310 sys 0m1.806s 00:14:04.310 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:04.310 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:04.310 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:14:04.310 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:14:04.310 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:04.310 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:14:04.310 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:04.310 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:14:04.310 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:04.311 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:04.311 rmmod nvme_tcp 00:14:04.311 rmmod nvme_fabrics 00:14:04.311 rmmod nvme_keyring 00:14:04.311 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:04.311 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:14:04.311 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:14:04.311 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:04.311 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:04.311 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:04.311 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:04.311 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:04.311 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
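Tear-down, condensed from the log above: the test partition is removed, the initiator detaches, the subsystem is deleted over RPC, the target (pid 76655) is stopped, and the kernel initiator modules are unloaded.

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 76655 && wait 76655        # killprocess() from autotest_common.sh
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics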
nvmf/common.sh@278 -- # remove_spdk_ns 00:14:04.311 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.311 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:04.311 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:04.570 00:14:04.570 real 0m24.470s 00:14:04.570 user 1m25.515s 00:14:04.570 sys 0m4.020s 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:04.570 ************************************ 00:14:04.570 END TEST nvmf_filesystem 00:14:04.570 ************************************ 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:04.570 ************************************ 00:14:04.570 START TEST nvmf_target_discovery 00:14:04.570 ************************************ 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:04.570 * Looking for test storage... 
00:14:04.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:04.570 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:04.571 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:04.571 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:04.571 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:04.571 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.571 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.571 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:04.571 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:04.571 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:04.571 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:04.571 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:04.571 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.571 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:04.571 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:04.571 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:04.571 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:04.571 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:04.571 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:04.571 Cannot find device "nvmf_tgt_br" 00:14:04.571 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:14:04.571 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:04.571 Cannot find device "nvmf_tgt_br2" 00:14:04.571 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:14:04.571 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:04.571 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:04.571 Cannot find device "nvmf_tgt_br" 00:14:04.571 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:14:04.571 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:04.571 Cannot find device "nvmf_tgt_br2" 00:14:04.571 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:14:04.571 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:04.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:04.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:04.829 
18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:04.829 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:05.087 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:05.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:05.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:14:05.087 00:14:05.087 --- 10.0.0.2 ping statistics --- 00:14:05.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.087 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:14:05.087 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:05.087 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:05.087 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:14:05.087 00:14:05.087 --- 10.0.0.3 ping statistics --- 00:14:05.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.087 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:05.087 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:05.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:05.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:14:05.087 00:14:05.087 --- 10.0.0.1 ping statistics --- 00:14:05.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.087 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:14:05.087 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:05.087 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:14:05.087 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:05.087 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:05.087 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:05.088 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:05.088 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:05.088 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:05.088 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:05.088 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:14:05.088 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:05.088 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:05.088 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:05.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.088 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=77159 00:14:05.088 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 77159 00:14:05.088 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:05.088 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 77159 ']' 00:14:05.088 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.088 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:05.088 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
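Because NET_TYPE=virt, nvmftestinit builds the initiator/target topology out of veth pairs, a bridge and a network namespace instead of physical NICs. The nvmf_veth_init steps logged above (leaving out the initial cleanup of devices that did not yet exist) amount to:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target-side ends move into the namespace; 10.0.0.1 is the initiator,
# 10.0.0.2/10.0.0.3 are the target addresses used by the tests.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peer interfaces together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Open the NVMe/TCP port and verify reachability in both directions.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1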
00:14:05.088 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:05.088 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:05.088 [2024-07-22 18:21:17.020964] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:14:05.088 [2024-07-22 18:21:17.021745] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.346 [2024-07-22 18:21:17.206809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:05.604 [2024-07-22 18:21:17.495765] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.604 [2024-07-22 18:21:17.495867] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.604 [2024-07-22 18:21:17.495885] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.604 [2024-07-22 18:21:17.495900] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.604 [2024-07-22 18:21:17.495912] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:05.604 [2024-07-22 18:21:17.496198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.604 [2024-07-22 18:21:17.497030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.604 [2024-07-22 18:21:17.497166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:05.604 [2024-07-22 18:21:17.497248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.170 [2024-07-22 18:21:18.120582] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.170 Null1 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.170 [2024-07-22 18:21:18.168939] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.170 Null2 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.170 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode2 Null2 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.430 Null3 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.430 Null4 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -a 10.0.0.2 -s 4420 00:14:06.430 00:14:06.430 Discovery Log Number of Records 6, Generation counter 6 00:14:06.430 =====Discovery Log Entry 0====== 00:14:06.430 trtype: tcp 00:14:06.430 adrfam: ipv4 00:14:06.430 subtype: current discovery subsystem 00:14:06.430 treq: not required 00:14:06.430 portid: 0 
00:14:06.430 trsvcid: 4420 00:14:06.430 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:06.430 traddr: 10.0.0.2 00:14:06.430 eflags: explicit discovery connections, duplicate discovery information 00:14:06.430 sectype: none 00:14:06.430 =====Discovery Log Entry 1====== 00:14:06.430 trtype: tcp 00:14:06.430 adrfam: ipv4 00:14:06.430 subtype: nvme subsystem 00:14:06.430 treq: not required 00:14:06.430 portid: 0 00:14:06.430 trsvcid: 4420 00:14:06.430 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:06.430 traddr: 10.0.0.2 00:14:06.430 eflags: none 00:14:06.430 sectype: none 00:14:06.430 =====Discovery Log Entry 2====== 00:14:06.430 trtype: tcp 00:14:06.430 adrfam: ipv4 00:14:06.430 subtype: nvme subsystem 00:14:06.430 treq: not required 00:14:06.430 portid: 0 00:14:06.430 trsvcid: 4420 00:14:06.430 subnqn: nqn.2016-06.io.spdk:cnode2 00:14:06.430 traddr: 10.0.0.2 00:14:06.430 eflags: none 00:14:06.430 sectype: none 00:14:06.430 =====Discovery Log Entry 3====== 00:14:06.430 trtype: tcp 00:14:06.430 adrfam: ipv4 00:14:06.430 subtype: nvme subsystem 00:14:06.430 treq: not required 00:14:06.430 portid: 0 00:14:06.430 trsvcid: 4420 00:14:06.430 subnqn: nqn.2016-06.io.spdk:cnode3 00:14:06.430 traddr: 10.0.0.2 00:14:06.430 eflags: none 00:14:06.430 sectype: none 00:14:06.430 =====Discovery Log Entry 4====== 00:14:06.430 trtype: tcp 00:14:06.430 adrfam: ipv4 00:14:06.430 subtype: nvme subsystem 00:14:06.430 treq: not required 00:14:06.430 portid: 0 00:14:06.430 trsvcid: 4420 00:14:06.430 subnqn: nqn.2016-06.io.spdk:cnode4 00:14:06.430 traddr: 10.0.0.2 00:14:06.430 eflags: none 00:14:06.430 sectype: none 00:14:06.430 =====Discovery Log Entry 5====== 00:14:06.430 trtype: tcp 00:14:06.430 adrfam: ipv4 00:14:06.430 subtype: discovery subsystem referral 00:14:06.430 treq: not required 00:14:06.430 portid: 0 00:14:06.430 trsvcid: 4430 00:14:06.430 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:06.430 traddr: 10.0.0.2 00:14:06.430 eflags: none 00:14:06.430 sectype: none 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:14:06.430 Perform nvmf subsystem discovery via RPC 00:14:06.430 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.431 [ 00:14:06.431 { 00:14:06.431 "allow_any_host": true, 00:14:06.431 "hosts": [], 00:14:06.431 "listen_addresses": [ 00:14:06.431 { 00:14:06.431 "adrfam": "IPv4", 00:14:06.431 "traddr": "10.0.0.2", 00:14:06.431 "trsvcid": "4420", 00:14:06.431 "trtype": "TCP" 00:14:06.431 } 00:14:06.431 ], 00:14:06.431 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:06.431 "subtype": "Discovery" 00:14:06.431 }, 00:14:06.431 { 00:14:06.431 "allow_any_host": true, 00:14:06.431 "hosts": [], 00:14:06.431 "listen_addresses": [ 00:14:06.431 { 00:14:06.431 "adrfam": "IPv4", 00:14:06.431 "traddr": "10.0.0.2", 00:14:06.431 "trsvcid": "4420", 00:14:06.431 "trtype": "TCP" 00:14:06.431 } 00:14:06.431 ], 00:14:06.431 "max_cntlid": 65519, 00:14:06.431 "max_namespaces": 32, 00:14:06.431 "min_cntlid": 1, 00:14:06.431 "model_number": "SPDK bdev Controller", 00:14:06.431 "namespaces": [ 00:14:06.431 { 00:14:06.431 "bdev_name": "Null1", 00:14:06.431 "name": "Null1", 00:14:06.431 "nguid": 
"DA9E6D06F9CF4147A21F319C5CF03EAB", 00:14:06.431 "nsid": 1, 00:14:06.431 "uuid": "da9e6d06-f9cf-4147-a21f-319c5cf03eab" 00:14:06.431 } 00:14:06.431 ], 00:14:06.431 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:06.431 "serial_number": "SPDK00000000000001", 00:14:06.431 "subtype": "NVMe" 00:14:06.431 }, 00:14:06.431 { 00:14:06.431 "allow_any_host": true, 00:14:06.431 "hosts": [], 00:14:06.431 "listen_addresses": [ 00:14:06.431 { 00:14:06.431 "adrfam": "IPv4", 00:14:06.431 "traddr": "10.0.0.2", 00:14:06.431 "trsvcid": "4420", 00:14:06.431 "trtype": "TCP" 00:14:06.431 } 00:14:06.431 ], 00:14:06.431 "max_cntlid": 65519, 00:14:06.431 "max_namespaces": 32, 00:14:06.431 "min_cntlid": 1, 00:14:06.431 "model_number": "SPDK bdev Controller", 00:14:06.431 "namespaces": [ 00:14:06.431 { 00:14:06.431 "bdev_name": "Null2", 00:14:06.431 "name": "Null2", 00:14:06.431 "nguid": "C0940CFB076C4D8B8ABA1397920526F3", 00:14:06.431 "nsid": 1, 00:14:06.431 "uuid": "c0940cfb-076c-4d8b-8aba-1397920526f3" 00:14:06.431 } 00:14:06.431 ], 00:14:06.431 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:06.431 "serial_number": "SPDK00000000000002", 00:14:06.431 "subtype": "NVMe" 00:14:06.431 }, 00:14:06.431 { 00:14:06.431 "allow_any_host": true, 00:14:06.431 "hosts": [], 00:14:06.431 "listen_addresses": [ 00:14:06.431 { 00:14:06.431 "adrfam": "IPv4", 00:14:06.431 "traddr": "10.0.0.2", 00:14:06.431 "trsvcid": "4420", 00:14:06.431 "trtype": "TCP" 00:14:06.431 } 00:14:06.431 ], 00:14:06.431 "max_cntlid": 65519, 00:14:06.431 "max_namespaces": 32, 00:14:06.431 "min_cntlid": 1, 00:14:06.431 "model_number": "SPDK bdev Controller", 00:14:06.431 "namespaces": [ 00:14:06.431 { 00:14:06.431 "bdev_name": "Null3", 00:14:06.431 "name": "Null3", 00:14:06.431 "nguid": "9D06425FDCFF4D54A9DC278CB0124C9C", 00:14:06.431 "nsid": 1, 00:14:06.431 "uuid": "9d06425f-dcff-4d54-a9dc-278cb0124c9c" 00:14:06.431 } 00:14:06.431 ], 00:14:06.431 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:14:06.431 "serial_number": "SPDK00000000000003", 00:14:06.431 "subtype": "NVMe" 00:14:06.431 }, 00:14:06.431 { 00:14:06.431 "allow_any_host": true, 00:14:06.431 "hosts": [], 00:14:06.431 "listen_addresses": [ 00:14:06.431 { 00:14:06.431 "adrfam": "IPv4", 00:14:06.431 "traddr": "10.0.0.2", 00:14:06.431 "trsvcid": "4420", 00:14:06.431 "trtype": "TCP" 00:14:06.431 } 00:14:06.431 ], 00:14:06.431 "max_cntlid": 65519, 00:14:06.431 "max_namespaces": 32, 00:14:06.431 "min_cntlid": 1, 00:14:06.431 "model_number": "SPDK bdev Controller", 00:14:06.431 "namespaces": [ 00:14:06.431 { 00:14:06.431 "bdev_name": "Null4", 00:14:06.431 "name": "Null4", 00:14:06.431 "nguid": "705D78CE540B47EFAC22754D9660F344", 00:14:06.431 "nsid": 1, 00:14:06.431 "uuid": "705d78ce-540b-47ef-ac22-754d9660f344" 00:14:06.431 } 00:14:06.431 ], 00:14:06.431 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:14:06.431 "serial_number": "SPDK00000000000004", 00:14:06.431 "subtype": "NVMe" 00:14:06.431 } 00:14:06.431 ] 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.431 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.690 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:06.690 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:06.690 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.690 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.690 
18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.690 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:06.691 rmmod nvme_tcp 00:14:06.691 rmmod nvme_fabrics 00:14:06.691 rmmod nvme_keyring 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 77159 ']' 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 77159 
00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 77159 ']' 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 77159 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77159 00:14:06.691 killing process with pid 77159 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77159' 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 77159 00:14:06.691 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 77159 00:14:08.065 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:08.065 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:08.065 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:08.065 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:08.065 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:08.065 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.065 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.065 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.065 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:08.065 00:14:08.065 real 0m3.510s 00:14:08.065 user 0m8.825s 00:14:08.065 sys 0m0.842s 00:14:08.065 ************************************ 00:14:08.065 END TEST nvmf_target_discovery 00:14:08.065 ************************************ 00:14:08.065 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:08.065 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:08.065 18:21:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:14:08.065 18:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:08.065 18:21:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:08.065 18:21:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:08.065 18:21:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:08.065 
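For reference, the nvmf_target_discovery run above condenses to the RPC sequence below. This is a sketch, not the script itself: rpc_cmd in the autotest harness is essentially a wrapper around scripts/rpc.py against the nvmf_tgt started earlier, and the --hostnqn/--hostid values are the ones generated for this particular run.

# TCP transport plus four null-bdev-backed subsystems, each listening on 10.0.0.2:4420
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
for i in 1 2 3 4; do
    scripts/rpc.py bdev_null_create Null$i 102400 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done

# Discovery listener and one referral, then verify from the initiator side and via RPC
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da \
              --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_get_subsystems

# Teardown mirrors setup; afterwards no bdevs should remain
for i in 1 2 3 4; do
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
    scripts/rpc.py bdev_null_delete Null$i
done
scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'    # empty output is the pass condition checked above

The six discovery-log records reported by nvme discover are exactly what this builds: the current discovery subsystem, the four cnode subsystems on port 4420, and the referral on port 4430.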
************************************ 00:14:08.065 START TEST nvmf_referrals 00:14:08.065 ************************************ 00:14:08.065 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:08.065 * Looking for test storage... 00:14:08.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:08.065 18:21:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:08.065 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:14:08.066 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:08.066 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.066 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:08.066 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:08.066 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:08.066 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.066 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.066 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.066 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:08.066 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:08.066 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:08.066 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:08.066 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:08.066 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:08.066 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:08.066 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:08.066 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:08.066 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:08.066 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:08.066 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:08.066 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:08.066 18:21:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:08.066 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:08.066 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:08.066 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:08.066 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:08.066 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:08.324 Cannot find device "nvmf_tgt_br" 00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:08.324 Cannot find device "nvmf_tgt_br2" 00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:08.324 Cannot find device "nvmf_tgt_br" 00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:08.324 Cannot find device "nvmf_tgt_br2" 00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:08.324 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:08.324 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:08.324 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:08.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:08.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:14:08.588 00:14:08.588 --- 10.0.0.2 ping statistics --- 00:14:08.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.588 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:08.588 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:08.588 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:14:08.588 00:14:08.588 --- 10.0.0.3 ping statistics --- 00:14:08.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.588 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:08.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:08.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:14:08.588 00:14:08.588 --- 10.0.0.1 ping statistics --- 00:14:08.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.588 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=77394 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 77394 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 77394 ']' 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:08.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:08.588 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:08.855 [2024-07-22 18:21:20.610218] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
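The nvmf_veth_init output interleaved above is the network plumbing the referrals test (and the earlier tests) depends on. Stripped of the harness wrappers, the topology is roughly the following sketch; the namespace and interface names are the ones from nvmf/common.sh shown in the log, and the target binary path is the one echoed above.

# Target namespace plus three veth pairs (one initiator-side, two target-side)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# 10.0.0.1 stays on the host (initiator) side; 10.0.0.2 and 10.0.0.3 live in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side veth ends together and allow NVMe/TCP traffic in
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# The target runs inside the namespace (backgrounded by the harness, which then waits via
# waitforlisten); nvme-cli reaches 10.0.0.2 from the host side, as the pings above confirm
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF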
00:14:08.855 [2024-07-22 18:21:20.610489] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.855 [2024-07-22 18:21:20.819427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:09.119 [2024-07-22 18:21:21.096625] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.119 [2024-07-22 18:21:21.096693] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.119 [2024-07-22 18:21:21.096711] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.119 [2024-07-22 18:21:21.096725] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.119 [2024-07-22 18:21:21.096737] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:09.119 [2024-07-22 18:21:21.097007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.119 [2024-07-22 18:21:21.097712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.119 [2024-07-22 18:21:21.097935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.119 [2024-07-22 18:21:21.097947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:09.695 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:09.695 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:14:09.695 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:09.695 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:09.695 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:09.695 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.695 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:09.695 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.695 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:09.695 [2024-07-22 18:21:21.555101] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.695 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.695 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:14:09.695 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.695 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:09.695 [2024-07-22 18:21:21.571360] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:09.695 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.695 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:14:09.695 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.695 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:09.695 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.695 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:14:09.695 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.695 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:09.695 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.696 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:14:09.696 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.696 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:09.696 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.696 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:09.696 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.696 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:09.696 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:14:09.696 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.696 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:09.696 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:09.696 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:09.696 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:09.696 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.696 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:09.696 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:09.696 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:09.696 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:09.955 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:10.214 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:10.473 18:21:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:10.473 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 
--hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:10.732 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:14:10.991 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:10.991 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:14:10.991 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:10.991 
18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:10.991 rmmod nvme_tcp 00:14:10.991 rmmod nvme_fabrics 00:14:10.991 rmmod nvme_keyring 00:14:10.991 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:10.991 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:14:10.991 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:14:10.991 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 77394 ']' 00:14:10.991 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 77394 00:14:10.991 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 77394 ']' 00:14:10.991 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 77394 00:14:10.991 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:14:10.991 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:10.991 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77394 00:14:10.991 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:10.991 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:10.991 killing process with pid 77394 00:14:10.991 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77394' 00:14:10.991 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 77394 00:14:10.991 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 77394 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:12.381 00:14:12.381 real 0m4.220s 00:14:12.381 user 0m12.113s 00:14:12.381 sys 0m1.078s 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:12.381 ************************************ 00:14:12.381 END TEST nvmf_referrals 00:14:12.381 ************************************ 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:12.381 18:21:24 
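For reference, the referral checks traced above reduce to a short sequence of RPC and initiator commands. This is a condensed sketch, not the test script itself: rpc_cmd in the trace is the suite's wrapper around scripts/rpc.py, and the --hostnqn/--hostid options passed to nvme discover in the trace are dropped here for brevity.

    # Add referrals; -n sets the referral's subsystem NQN (the trace above shows
    # "discovery" resolving to nqn.2014-08.org.nvmexpress.discovery).
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

    # The target's view of its referrals over JSON-RPC ...
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # ... and the initiator's view from the discovery log page; the jq filter drops
    # the "current discovery subsystem" record so only referral entries remain.
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

    # Remove a referral again.
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

The two views should agree, which is exactly what the [[ ... == ... ]] comparisons in the trace assert before and after each add/remove.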
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:12.381 ************************************ 00:14:12.381 START TEST nvmf_connect_disconnect 00:14:12.381 ************************************ 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:12.381 * Looking for test storage... 00:14:12.381 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # 
export NVMF_APP_SHM_ID 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:12.381 18:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:12.381 Cannot find device "nvmf_tgt_br" 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:12.381 Cannot find device "nvmf_tgt_br2" 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:12.381 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:12.640 Cannot find device "nvmf_tgt_br" 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:12.640 Cannot find device "nvmf_tgt_br2" 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:12.640 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:12.640 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip 
netns add nvmf_tgt_ns_spdk 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:12.640 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:12.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
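The nvmf_veth_init sequence traced above amounts to the following. This is a condensed sketch that keeps only the first target interface; the log also creates nvmf_tgt_if2 with 10.0.0.3/24 in the same way.

    # Namespace for the target, two veth pairs, and a bridge tying them together.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Initiator side stays in the default namespace; target side lives in the namespace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side peers and open TCP/4420 toward the initiator interface.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity check, matching the pings in the trace.
    ping -c 1 10.0.0.2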
00:14:12.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:14:12.898 00:14:12.898 --- 10.0.0.2 ping statistics --- 00:14:12.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.898 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:14:12.898 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:12.898 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:12.898 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:14:12.898 00:14:12.898 --- 10.0.0.3 ping statistics --- 00:14:12.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.898 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:12.898 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:12.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:12.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:12.898 00:14:12.898 --- 10.0.0.1 ping statistics --- 00:14:12.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.898 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:12.898 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:12.898 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:14:12.898 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:12.898 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:12.898 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:12.898 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:12.898 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:12.898 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:12.898 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:12.898 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:12.898 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:12.898 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:12.898 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:12.898 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=77703 00:14:12.898 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:12.899 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 77703 00:14:12.899 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 77703 ']' 00:14:12.899 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.899 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.899 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:12.899 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.899 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:12.899 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:12.899 [2024-07-22 18:21:24.802268] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:14:12.899 [2024-07-22 18:21:24.802443] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.156 [2024-07-22 18:21:24.976429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:13.414 [2024-07-22 18:21:25.227461] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.414 [2024-07-22 18:21:25.227566] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.414 [2024-07-22 18:21:25.227586] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.414 [2024-07-22 18:21:25.227601] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.414 [2024-07-22 18:21:25.227613] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
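nvmfappstart then launches the target inside that namespace and blocks until its RPC socket answers. A rough sketch, using the binary path and flags from the trace; a simple poll with rpc_get_methods stands in here for the suite's waitforlisten helper.

    # Start nvmf_tgt in the target namespace with all four cores and full tracing.
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Wait until the app listens on the default RPC socket.
    while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done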
00:14:13.414 [2024-07-22 18:21:25.227956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.414 [2024-07-22 18:21:25.228623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:13.414 [2024-07-22 18:21:25.228858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.414 [2024-07-22 18:21:25.228883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.980 [2024-07-22 18:21:25.843163] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.980 18:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.980 [2024-07-22 18:21:25.961966] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:14:13.980 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:16.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.354 
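The rpc_cmd calls traced above (transport, malloc bdev, subsystem, namespace, listener) provision the target that the connect/disconnect loop below exercises. Condensed, and assuming scripts/rpc.py talks to the default /var/tmp/spdk.sock:

    # TCP transport with the options used by the test, plus a 64 MiB / 512 B malloc bdev.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    bdev=$(scripts/rpc.py bdev_malloc_create 64 512)   # prints the bdev name, e.g. Malloc0

    # Export the bdev through nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420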
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.402 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:44.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:51.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:55.477 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:58.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:02.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:04.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:06.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:09.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:13.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:17:15.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:22.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:29.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:31.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:34.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:36.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:38.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:40.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:43.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:45.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:47.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:50.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:52.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:54.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:56.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:59.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:01.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:01.102 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:18:01.102 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:18:01.102 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:01.102 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:18:01.102 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:01.102 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:18:01.102 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:01.102 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:01.102 rmmod nvme_tcp 00:18:01.102 rmmod nvme_fabrics 00:18:01.102 rmmod nvme_keyring 00:18:01.102 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:01.102 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:18:01.102 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:18:01.102 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 77703 ']' 00:18:01.102 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 77703 00:18:01.102 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 77703 ']' 00:18:01.102 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 77703 00:18:01.102 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:18:01.102 
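Each "disconnected 1 controller(s)" line above is one iteration of the connect/disconnect loop; the trace sets NVME_CONNECT='nvme connect -i 8' and num_iterations=100. A rough sketch of an iteration, not the script verbatim: the real test waits for the namespace block device to appear and disappear, while a sleep stands in for that here.

    for i in $(seq 1 100); do
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da
        sleep 1    # placeholder for waiting on the /dev/nvme* namespace
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # prints the "NQN:... disconnected" line
    done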
18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:01.102 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77703 00:18:01.102 killing process with pid 77703 00:18:01.102 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:01.102 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:01.102 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77703' 00:18:01.102 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 77703 00:18:01.102 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 77703 00:18:02.478 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:02.478 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:02.478 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:02.478 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:02.478 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:02.478 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.478 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:02.478 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.478 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:02.478 00:18:02.478 real 3m50.133s 00:18:02.478 user 14m54.401s 00:18:02.478 sys 0m20.576s 00:18:02.478 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:02.478 ************************************ 00:18:02.478 END TEST nvmf_connect_disconnect 00:18:02.478 ************************************ 00:18:02.478 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:02.478 18:25:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:18:02.478 18:25:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:18:02.478 18:25:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:02.478 18:25:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:02.478 18:25:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:02.478 ************************************ 00:18:02.478 START TEST nvmf_multitarget 00:18:02.478 ************************************ 00:18:02.478 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:18:02.478 * Looking for test 
storage... 00:18:02.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.737 
18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # 
'[' -z tcp ']' 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:02.737 Cannot find device "nvmf_tgt_br" 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- 
# ip link set nvmf_tgt_br2 nomaster 00:18:02.737 Cannot find device "nvmf_tgt_br2" 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:02.737 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:02.737 Cannot find device "nvmf_tgt_br" 00:18:02.738 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:18:02.738 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:02.738 Cannot find device "nvmf_tgt_br2" 00:18:02.738 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:18:02.738 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:02.738 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:02.738 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:02.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:02.738 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:18:02.738 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:02.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:02.738 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:18:02.738 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:02.738 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:02.738 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:02.738 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:02.738 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:02.738 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:02.738 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:02.738 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:02.738 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:02.738 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:02.738 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:02.738 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:02.738 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set 
nvmf_tgt_br2 up 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:02.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:02.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:18:02.999 00:18:02.999 --- 10.0.0.2 ping statistics --- 00:18:02.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.999 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:02.999 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:02.999 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:18:02.999 00:18:02.999 --- 10.0.0.3 ping statistics --- 00:18:02.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.999 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:02.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:02.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:18:02.999 00:18:02.999 --- 10.0.0.1 ping statistics --- 00:18:02.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.999 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=81451 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 81451 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 81451 ']' 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.999 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:02.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.000 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.000 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.000 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:03.258 [2024-07-22 18:25:15.021973] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:03.258 [2024-07-22 18:25:15.022176] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.258 [2024-07-22 18:25:15.205999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:03.516 [2024-07-22 18:25:15.512198] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.516 [2024-07-22 18:25:15.512268] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.516 [2024-07-22 18:25:15.512285] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.516 [2024-07-22 18:25:15.512302] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.516 [2024-07-22 18:25:15.512316] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:03.516 [2024-07-22 18:25:15.512560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.516 [2024-07-22 18:25:15.513145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.516 [2024-07-22 18:25:15.513405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.516 [2024-07-22 18:25:15.513418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:04.079 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.079 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:18:04.079 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:04.079 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:04.079 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:04.079 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.080 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:04.080 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:04.080 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:18:04.080 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:18:04.080 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:18:04.337 "nvmf_tgt_1" 00:18:04.337 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:18:04.337 "nvmf_tgt_2" 00:18:04.594 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:04.594 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@28 -- # jq length 00:18:04.594 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:18:04.594 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:18:04.594 true 00:18:04.851 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:18:04.851 true 00:18:04.851 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:04.851 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:18:05.108 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:18:05.108 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:18:05.108 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:18:05.108 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:05.108 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:18:05.108 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:05.108 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:18:05.108 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:05.108 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:05.108 rmmod nvme_tcp 00:18:05.108 rmmod nvme_fabrics 00:18:05.108 rmmod nvme_keyring 00:18:05.108 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:05.108 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:18:05.108 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:18:05.108 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 81451 ']' 00:18:05.108 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 81451 00:18:05.108 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 81451 ']' 00:18:05.108 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 81451 00:18:05.108 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:18:05.108 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:05.108 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81451 00:18:05.108 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:05.108 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:05.108 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81451' 00:18:05.108 killing process with pid 
81451 00:18:05.108 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 81451 00:18:05.108 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 81451 00:18:06.482 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:06.482 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:06.482 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:06.482 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:06.482 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:06.482 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.483 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:06.483 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.483 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:06.483 ************************************ 00:18:06.483 END TEST nvmf_multitarget 00:18:06.483 ************************************ 00:18:06.483 00:18:06.483 real 0m3.973s 00:18:06.483 user 0m11.412s 00:18:06.483 sys 0m0.911s 00:18:06.483 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:06.483 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:06.483 18:25:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:18:06.483 18:25:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:06.483 18:25:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:06.483 18:25:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:06.483 18:25:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:06.483 ************************************ 00:18:06.483 START TEST nvmf_rpc 00:18:06.483 ************************************ 00:18:06.483 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:06.741 * Looking for test storage... 
00:18:06.741 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:06.741 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:06.741 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:18:06.741 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:06.741 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:06.741 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:06.741 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:06.741 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:06.741 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:06.741 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:06.741 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:06.741 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:06.741 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:06.742 18:25:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:06.742 Cannot find device "nvmf_tgt_br" 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:06.742 Cannot find device "nvmf_tgt_br2" 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:06.742 Cannot find device "nvmf_tgt_br" 00:18:06.742 18:25:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:06.742 Cannot find device "nvmf_tgt_br2" 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:06.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:06.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:06.742 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:07.001 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:07.001 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:07.001 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:07.001 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:07.001 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:07.001 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:07.001 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:07.001 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:07.001 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:07.001 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:07.001 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:07.001 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:07.001 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:07.001 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:07.001 
18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:07.001 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:07.001 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:07.001 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:07.001 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:07.001 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:07.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:07.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:18:07.001 00:18:07.001 --- 10.0.0.2 ping statistics --- 00:18:07.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.001 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:18:07.001 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:07.001 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:07.001 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:18:07.002 00:18:07.002 --- 10.0.0.3 ping statistics --- 00:18:07.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.002 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:18:07.002 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:07.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:07.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:18:07.002 00:18:07.002 --- 10.0.0.1 ping statistics --- 00:18:07.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.002 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:18:07.002 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:07.002 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:18:07.002 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:07.002 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:07.002 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:07.002 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:07.002 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:07.002 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:07.002 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:07.002 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:18:07.002 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:07.002 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:07.002 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.002 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=81692 00:18:07.002 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:07.002 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 81692 00:18:07.002 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 81692 ']' 00:18:07.002 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.002 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:07.002 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.002 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:07.002 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.260 [2024-07-22 18:25:19.040105] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:18:07.260 [2024-07-22 18:25:19.040268] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.260 [2024-07-22 18:25:19.211080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:07.519 [2024-07-22 18:25:19.512204] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.519 [2024-07-22 18:25:19.512288] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.519 [2024-07-22 18:25:19.512322] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.519 [2024-07-22 18:25:19.512340] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.519 [2024-07-22 18:25:19.512354] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:07.519 [2024-07-22 18:25:19.512633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.519 [2024-07-22 18:25:19.513395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.519 [2024-07-22 18:25:19.513543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:07.519 [2024-07-22 18:25:19.513654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.085 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:08.085 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:18:08.085 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:08.085 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:08.085 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.085 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.085 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:18:08.085 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.085 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.085 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.085 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:18:08.085 "poll_groups": [ 00:18:08.085 { 00:18:08.085 "admin_qpairs": 0, 00:18:08.085 "completed_nvme_io": 0, 00:18:08.085 "current_admin_qpairs": 0, 00:18:08.085 "current_io_qpairs": 0, 00:18:08.085 "io_qpairs": 0, 00:18:08.085 "name": "nvmf_tgt_poll_group_000", 00:18:08.085 "pending_bdev_io": 0, 00:18:08.085 "transports": [] 00:18:08.085 }, 00:18:08.085 { 00:18:08.085 "admin_qpairs": 0, 00:18:08.085 "completed_nvme_io": 0, 00:18:08.085 "current_admin_qpairs": 0, 00:18:08.085 "current_io_qpairs": 0, 00:18:08.085 "io_qpairs": 0, 00:18:08.085 "name": "nvmf_tgt_poll_group_001", 00:18:08.085 "pending_bdev_io": 0, 00:18:08.085 "transports": [] 00:18:08.085 }, 00:18:08.085 { 00:18:08.085 "admin_qpairs": 0, 00:18:08.085 "completed_nvme_io": 0, 00:18:08.085 "current_admin_qpairs": 0, 00:18:08.085 "current_io_qpairs": 0, 00:18:08.085 "io_qpairs": 0, 00:18:08.085 "name": "nvmf_tgt_poll_group_002", 00:18:08.085 "pending_bdev_io": 0, 00:18:08.085 "transports": [] 00:18:08.085 }, 00:18:08.085 { 00:18:08.085 "admin_qpairs": 0, 00:18:08.085 "completed_nvme_io": 0, 00:18:08.085 "current_admin_qpairs": 0, 00:18:08.085 "current_io_qpairs": 0, 00:18:08.085 "io_qpairs": 0, 00:18:08.085 "name": "nvmf_tgt_poll_group_003", 00:18:08.085 "pending_bdev_io": 0, 00:18:08.085 "transports": [] 00:18:08.085 } 00:18:08.085 ], 00:18:08.085 "tick_rate": 2200000000 00:18:08.085 }' 00:18:08.085 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:18:08.085 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:18:08.085 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:18:08.085 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:18:08.085 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
00:18:08.085 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:18:08.343 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:18:08.343 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:08.343 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.343 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.343 [2024-07-22 18:25:20.112274] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:08.343 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.343 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:18:08.343 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.343 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.343 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.343 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:18:08.343 "poll_groups": [ 00:18:08.343 { 00:18:08.343 "admin_qpairs": 0, 00:18:08.343 "completed_nvme_io": 0, 00:18:08.343 "current_admin_qpairs": 0, 00:18:08.343 "current_io_qpairs": 0, 00:18:08.343 "io_qpairs": 0, 00:18:08.343 "name": "nvmf_tgt_poll_group_000", 00:18:08.343 "pending_bdev_io": 0, 00:18:08.343 "transports": [ 00:18:08.343 { 00:18:08.343 "trtype": "TCP" 00:18:08.343 } 00:18:08.343 ] 00:18:08.343 }, 00:18:08.343 { 00:18:08.343 "admin_qpairs": 0, 00:18:08.343 "completed_nvme_io": 0, 00:18:08.343 "current_admin_qpairs": 0, 00:18:08.343 "current_io_qpairs": 0, 00:18:08.343 "io_qpairs": 0, 00:18:08.343 "name": "nvmf_tgt_poll_group_001", 00:18:08.343 "pending_bdev_io": 0, 00:18:08.343 "transports": [ 00:18:08.343 { 00:18:08.343 "trtype": "TCP" 00:18:08.343 } 00:18:08.343 ] 00:18:08.343 }, 00:18:08.343 { 00:18:08.343 "admin_qpairs": 0, 00:18:08.343 "completed_nvme_io": 0, 00:18:08.343 "current_admin_qpairs": 0, 00:18:08.343 "current_io_qpairs": 0, 00:18:08.343 "io_qpairs": 0, 00:18:08.343 "name": "nvmf_tgt_poll_group_002", 00:18:08.343 "pending_bdev_io": 0, 00:18:08.343 "transports": [ 00:18:08.343 { 00:18:08.343 "trtype": "TCP" 00:18:08.343 } 00:18:08.343 ] 00:18:08.343 }, 00:18:08.343 { 00:18:08.343 "admin_qpairs": 0, 00:18:08.343 "completed_nvme_io": 0, 00:18:08.343 "current_admin_qpairs": 0, 00:18:08.343 "current_io_qpairs": 0, 00:18:08.343 "io_qpairs": 0, 00:18:08.343 "name": "nvmf_tgt_poll_group_003", 00:18:08.343 "pending_bdev_io": 0, 00:18:08.343 "transports": [ 00:18:08.343 { 00:18:08.343 "trtype": "TCP" 00:18:08.343 } 00:18:08.343 ] 00:18:08.343 } 00:18:08.343 ], 00:18:08.343 "tick_rate": 2200000000 00:18:08.343 }' 00:18:08.343 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:18:08.343 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:08.343 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:18:08.344 18:25:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.344 Malloc1 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.344 [2024-07-22 18:25:20.351662] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -a 10.0.0.2 -s 4420 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -a 10.0.0.2 -s 4420 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:08.344 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:18:08.602 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:08.602 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:18:08.602 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:08.602 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:18:08.602 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:18:08.602 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -a 10.0.0.2 -s 4420 00:18:08.602 [2024-07-22 18:25:20.380717] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da' 00:18:08.602 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:08.602 could not add new controller: failed to write to nvme-fabrics device 00:18:08.602 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:18:08.602 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:08.602 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:08.602 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:08.602 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:18:08.602 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.602 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.602 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.602 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 
--hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:08.602 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:18:08.602 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:18:08.602 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:08.602 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:08.602 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:11.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:11.134 [2024-07-22 18:25:22.793145] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da' 00:18:11.134 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:11.134 could not add new controller: failed to write to nvme-fabrics device 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:11.134 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
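[Editor's note] The trace above is the host allow-list check from target/rpc.sh: with allow_any_host disabled, the first `nvme connect` is expected to fail with "Subsystem ... does not allow host ...", and the same connect only succeeds after `nvmf_subsystem_add_host` registers the host NQN. A minimal standalone sketch of that flow is below; it is an illustration, not the test script itself, and it assumes a running SPDK target driven through `scripts/rpc.py` (the trace uses the `rpc_cmd` wrapper instead). The NQNs, serial, address and port are the ones visible in the log.

    # Target side, mirroring the RPCs traced above: TCP transport, a 64 MiB
    # malloc bdev, and a subsystem whose host allow-list is enforced.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da
    HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da

    # Expected to fail while the host is not on the allow-list
    # ("could not add new controller: failed to write to nvme-fabrics device").
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$HOSTNQN" --hostid="$HOSTID" || echo "rejected as expected"

    # Whitelist the host NQN; the same connect should now succeed.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$HOSTNQN" --hostid="$HOSTID"
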
00:18:13.034 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:13.035 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:13.035 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:13.035 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:13.035 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:13.035 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:18:13.035 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:13.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:13.035 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:13.035 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:18:13.035 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:13.035 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:13.035 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:13.035 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.292 [2024-07-22 18:25:25.098786] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.292 
18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:13.292 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:15.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
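[Editor's note] The repeated `lsblk -l -o NAME,SERIAL | grep ... SPDKISFASTANDAWESOME` pairs in the trace come from the waitforserial / waitforserial_disconnect helpers in autotest_common.sh, which simply poll the block-device list until a namespace with the subsystem's serial number appears (or disappears). A rough equivalent of the wait-for-attach side, written as a plain polling loop, is sketched below; the 15-iteration cap and 2-second sleep mirror the counters visible in the trace, and the function name is hypothetical.

    # Poll until an NVMe block device with the given serial shows up, or give up.
    wait_for_serial() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            if [[ $(lsblk -l -o NAME,SERIAL | grep -c "$serial") -ge 1 ]]; then
                return 0    # device attached
            fi
            sleep 2
        done
        return 1            # timed out waiting for the namespace
    }

    wait_for_serial SPDKISFASTANDAWESOME
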
00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.818 [2024-07-22 18:25:27.413090] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.818 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.819 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:15.819 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:15.819 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:18:15.819 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:15.819 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:15.819 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:18:17.715 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:17.715 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:17.715 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:17.715 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:17.715 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:17.715 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:18:17.715 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:17.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:17.973 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:17.973 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:18:17.973 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:17.973 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:17.973 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:17.973 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:17.973 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:18:17.973 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:17.973 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.973 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.973 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.973 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:17.973 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.973 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.973 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.973 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:17.973 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:17.973 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.973 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:18:17.973 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.973 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:17.973 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.974 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.974 [2024-07-22 18:25:29.828083] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.974 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.974 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:17.974 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.974 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.974 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.974 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:17.974 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.974 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.974 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.974 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:18.232 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:18.232 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:18:18.232 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:18.232 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:18.232 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:20.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:20.131 18:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.131 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.390 [2024-07-22 18:25:32.154051] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.390 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.390 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:20.390 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.390 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.390 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.390 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:20.390 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.390 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.390 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.390 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:20.390 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:20.390 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:18:20.390 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:20.390 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:20.390 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:18:22.954 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:22.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.955 [2024-07-22 18:25:34.567444] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:22.955 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:18:24.860 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:24.860 18:25:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:24.860 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:24.860 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:24.860 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:24.860 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:18:24.860 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:24.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:24.860 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:24.860 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:18:24.860 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:24.860 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:24.860 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:24.860 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:24.860 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:18:24.860 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:24.860 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.860 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.861 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.861 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:24.861 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.861 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.861 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.861 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:18:24.861 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:24.861 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:24.861 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.861 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.132 18:25:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.132 [2024-07-22 18:25:36.886927] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.132 [2024-07-22 18:25:36.934965] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.132 [2024-07-22 18:25:36.983032] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.132 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.132 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.132 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:25.132 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.132 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.132 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.132 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:25.132 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.132 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.132 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.132 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:25.132 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:25.132 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.132 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.132 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.132 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:25.132 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.132 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.132 [2024-07-22 18:25:37.031112] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.132 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.132 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:25.132 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.132 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.132 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.132 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.133 [2024-07-22 18:25:37.083162] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.133 18:25:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:18:25.133 "poll_groups": [ 00:18:25.133 { 00:18:25.133 "admin_qpairs": 2, 00:18:25.133 "completed_nvme_io": 69, 00:18:25.133 "current_admin_qpairs": 0, 00:18:25.133 "current_io_qpairs": 0, 00:18:25.133 "io_qpairs": 16, 00:18:25.133 "name": "nvmf_tgt_poll_group_000", 00:18:25.133 "pending_bdev_io": 0, 00:18:25.133 "transports": [ 00:18:25.133 { 00:18:25.133 "trtype": "TCP" 00:18:25.133 } 00:18:25.133 ] 00:18:25.133 }, 00:18:25.133 { 00:18:25.133 "admin_qpairs": 3, 00:18:25.133 "completed_nvme_io": 115, 00:18:25.133 "current_admin_qpairs": 0, 00:18:25.133 "current_io_qpairs": 0, 00:18:25.133 "io_qpairs": 17, 00:18:25.133 "name": "nvmf_tgt_poll_group_001", 00:18:25.133 "pending_bdev_io": 0, 00:18:25.133 "transports": [ 00:18:25.133 { 00:18:25.133 "trtype": "TCP" 00:18:25.133 } 00:18:25.133 ] 00:18:25.133 }, 00:18:25.133 { 00:18:25.133 "admin_qpairs": 1, 00:18:25.133 "completed_nvme_io": 167, 00:18:25.133 "current_admin_qpairs": 0, 00:18:25.133 "current_io_qpairs": 0, 00:18:25.133 "io_qpairs": 19, 00:18:25.133 "name": "nvmf_tgt_poll_group_002", 00:18:25.133 "pending_bdev_io": 0, 00:18:25.133 "transports": [ 00:18:25.133 { 00:18:25.133 "trtype": "TCP" 00:18:25.133 } 00:18:25.133 ] 00:18:25.133 }, 00:18:25.133 { 00:18:25.133 "admin_qpairs": 1, 00:18:25.133 "completed_nvme_io": 69, 00:18:25.133 "current_admin_qpairs": 0, 00:18:25.133 "current_io_qpairs": 0, 00:18:25.133 "io_qpairs": 18, 00:18:25.133 "name": "nvmf_tgt_poll_group_003", 00:18:25.133 "pending_bdev_io": 0, 00:18:25.133 "transports": [ 00:18:25.133 { 00:18:25.133 "trtype": "TCP" 00:18:25.133 } 00:18:25.133 ] 00:18:25.133 } 00:18:25.133 ], 00:18:25.133 "tick_rate": 2200000000 00:18:25.133 }' 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:18:25.133 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:25.392 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:25.392 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:25.392 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:18:25.392 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:18:25.392 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:25.392 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 
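[Editor's note] The `jsum` helper used for this final check just runs a jq filter over the saved `nvmf_get_stats` JSON and sums the resulting column with awk; the test only asserts that the totals (7 admin qpairs and 70 I/O qpairs in this run) are greater than zero after the connect/disconnect loops. A standalone version of that aggregation, assuming the target is queried through `scripts/rpc.py` rather than the trace's `rpc_cmd` wrapper, could look like this:

    # Capture the target's poll-group statistics and sum one field across groups.
    stats=$(scripts/rpc.py nvmf_get_stats)

    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    admin_qpairs=$(jsum '.poll_groups[].admin_qpairs')
    io_qpairs=$(jsum '.poll_groups[].io_qpairs')
    (( admin_qpairs > 0 && io_qpairs > 0 )) && echo "stats look sane"
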
00:18:25.392 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:25.392 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:18:25.392 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:18:25.392 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:18:25.392 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:18:25.392 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:25.392 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:18:25.392 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:25.392 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:18:25.392 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:25.392 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:25.392 rmmod nvme_tcp 00:18:25.392 rmmod nvme_fabrics 00:18:25.392 rmmod nvme_keyring 00:18:25.392 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:25.392 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:18:25.392 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:18:25.392 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 81692 ']' 00:18:25.392 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 81692 00:18:25.392 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 81692 ']' 00:18:25.392 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 81692 00:18:25.392 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:18:25.393 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:25.393 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81692 00:18:25.393 killing process with pid 81692 00:18:25.393 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:25.393 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:25.393 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81692' 00:18:25.393 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 81692 00:18:25.393 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 81692 00:18:26.770 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:26.770 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:26.770 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:26.770 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:26.770 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:26.770 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.770 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:26.770 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:27.029 00:18:27.029 real 0m20.371s 00:18:27.029 user 1m14.999s 00:18:27.029 sys 0m2.287s 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:27.029 ************************************ 00:18:27.029 END TEST nvmf_rpc 00:18:27.029 ************************************ 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:27.029 ************************************ 00:18:27.029 START TEST nvmf_invalid 00:18:27.029 ************************************ 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:27.029 * Looking for test storage... 00:18:27.029 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:18:27.029 18:25:38 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt 
== phy ]] 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:27.029 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:27.029 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:27.029 Cannot find device "nvmf_tgt_br" 00:18:27.030 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:18:27.030 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:27.030 Cannot find device "nvmf_tgt_br2" 00:18:27.030 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:18:27.030 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:27.030 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:27.030 Cannot find device "nvmf_tgt_br" 00:18:27.030 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:18:27.030 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:27.288 Cannot find device "nvmf_tgt_br2" 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:27.288 Cannot open network namespace "nvmf_tgt_ns_spdk": 
No such file or directory 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:27.288 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:27.288 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:27.289 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:27.547 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 
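The nvmf_veth_init sequence above wires the initiator side (host, 10.0.0.1) to two target interfaces (10.0.0.2 and 10.0.0.3) living in the nvmf_tgt_ns_spdk namespace, with the host ends of the veth pairs enslaved to a single bridge. Condensed into a standalone sketch built from the same commands seen in the trace (run as root; the expected-to-fail cleanup steps are omitted):

ip netns add nvmf_tgt_ns_spdk
# One veth pair per interface; the *_br ends stay on the host and join the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Allow NVMe/TCP traffic to the initiator interface and forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow in the trace confirm the topology: the host reaches 10.0.0.2 and 10.0.0.3, and the namespace reaches 10.0.0.1.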
00:18:27.547 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:27.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:27.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:18:27.547 00:18:27.547 --- 10.0.0.2 ping statistics --- 00:18:27.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.547 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:18:27.547 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:27.547 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:27.547 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:18:27.547 00:18:27.547 --- 10.0.0.3 ping statistics --- 00:18:27.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.547 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:18:27.547 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:27.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:27.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:18:27.547 00:18:27.547 --- 10.0.0.1 ping statistics --- 00:18:27.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.547 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:27.547 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:27.547 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:18:27.547 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:27.547 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:27.547 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:27.547 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:27.547 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:27.547 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:27.547 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:27.547 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:27.547 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:27.547 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:27.547 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:27.547 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=82212 00:18:27.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:27.547 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 82212 00:18:27.547 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 82212 ']' 00:18:27.547 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:27.548 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.548 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.548 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.548 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.548 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:27.548 [2024-07-22 18:25:39.482104] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:18:27.548 [2024-07-22 18:25:39.482303] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.806 [2024-07-22 18:25:39.663146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:28.065 [2024-07-22 18:25:39.967353] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.065 [2024-07-22 18:25:39.967468] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.065 [2024-07-22 18:25:39.967493] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.065 [2024-07-22 18:25:39.967509] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.065 [2024-07-22 18:25:39.967522] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
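nvmfappstart launches the target inside the namespace and then blocks until its JSON-RPC socket is usable. The effective command line appears in the trace above; the wait loop below is only a hedged approximation, since waitforlisten's implementation is not part of this output:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Approximation of waitforlisten (an assumption, not the real helper): poll until the
# UNIX-domain RPC socket exists and answers a trivial RPC, or the process exits.
while kill -0 "$nvmfpid" 2>/dev/null; do
    [ -S /var/tmp/spdk.sock ] &&
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done

With -m 0xF the app claims four cores, which matches the four reactor_run notices that follow in the log.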
00:18:28.065 [2024-07-22 18:25:39.967944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.065 [2024-07-22 18:25:39.968314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.065 [2024-07-22 18:25:39.968492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.065 [2024-07-22 18:25:39.968500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:28.632 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.632 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:18:28.632 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:28.632 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:28.632 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:28.632 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.632 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:28.632 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode500 00:18:28.890 [2024-07-22 18:25:40.867170] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:28.890 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/22 18:25:40 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode500 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:18:28.890 request: 00:18:28.890 { 00:18:28.890 "method": "nvmf_create_subsystem", 00:18:28.890 "params": { 00:18:28.890 "nqn": "nqn.2016-06.io.spdk:cnode500", 00:18:28.890 "tgt_name": "foobar" 00:18:28.890 } 00:18:28.890 } 00:18:28.890 Got JSON-RPC error response 00:18:28.890 GoRPCClient: error on JSON-RPC call' 00:18:28.890 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/22 18:25:40 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode500 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:18:28.890 request: 00:18:28.890 { 00:18:28.890 "method": "nvmf_create_subsystem", 00:18:28.890 "params": { 00:18:28.890 "nqn": "nqn.2016-06.io.spdk:cnode500", 00:18:28.890 "tgt_name": "foobar" 00:18:28.890 } 00:18:28.890 } 00:18:28.890 Got JSON-RPC error response 00:18:28.890 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:28.890 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:28.890 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode29949 00:18:29.148 [2024-07-22 18:25:41.163647] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29949: invalid serial number 'SPDKISFASTANDAWESOME' 00:18:29.406 18:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/22 18:25:41 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode29949 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:18:29.406 request: 00:18:29.406 { 00:18:29.406 "method": "nvmf_create_subsystem", 00:18:29.406 "params": { 00:18:29.406 "nqn": "nqn.2016-06.io.spdk:cnode29949", 00:18:29.406 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:18:29.406 } 00:18:29.406 } 00:18:29.406 Got JSON-RPC error response 00:18:29.406 GoRPCClient: error on JSON-RPC call' 00:18:29.406 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/22 18:25:41 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode29949 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:18:29.406 request: 00:18:29.406 { 00:18:29.406 "method": "nvmf_create_subsystem", 00:18:29.406 "params": { 00:18:29.406 "nqn": "nqn.2016-06.io.spdk:cnode29949", 00:18:29.406 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:18:29.406 } 00:18:29.406 } 00:18:29.406 Got JSON-RPC error response 00:18:29.406 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:29.406 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:29.406 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25429 00:18:29.406 [2024-07-22 18:25:41.420133] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25429: invalid model number 'SPDK_Controller' 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/22 18:25:41 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode25429], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:18:29.664 request: 00:18:29.664 { 00:18:29.664 "method": "nvmf_create_subsystem", 00:18:29.664 "params": { 00:18:29.664 "nqn": "nqn.2016-06.io.spdk:cnode25429", 00:18:29.664 "model_number": "SPDK_Controller\u001f" 00:18:29.664 } 00:18:29.664 } 00:18:29.664 Got JSON-RPC error response 00:18:29.664 GoRPCClient: error on JSON-RPC call' 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/22 18:25:41 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode25429], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:18:29.664 request: 00:18:29.664 { 00:18:29.664 "method": "nvmf_create_subsystem", 00:18:29.664 "params": { 00:18:29.664 "nqn": "nqn.2016-06.io.spdk:cnode25429", 00:18:29.664 "model_number": "SPDK_Controller\u001f" 00:18:29.664 } 00:18:29.664 } 00:18:29.664 Got JSON-RPC error response 00:18:29.664 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:18:29.664 18:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.664 18:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:18:29.664 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:18:29.665 
18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 
00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 3 == \- ]] 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '3kX h|UG~/mRZ*H\SJ\/Q' 00:18:29.665 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '3kX h|UG~/mRZ*H\SJ\/Q' nqn.2016-06.io.spdk:cnode31758 00:18:29.942 [2024-07-22 18:25:41.776659] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31758: invalid serial number '3kX h|UG~/mRZ*H\SJ\/Q' 00:18:29.942 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/07/22 18:25:41 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode31758 serial_number:3kX h|UG~/mRZ*H\SJ\/Q], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN 3kX h|UG~/mRZ*H\SJ\/Q 00:18:29.942 request: 00:18:29.942 { 00:18:29.942 "method": "nvmf_create_subsystem", 00:18:29.943 "params": { 00:18:29.943 "nqn": "nqn.2016-06.io.spdk:cnode31758", 00:18:29.943 "serial_number": "3kX h|UG~/mRZ*H\\SJ\\/Q" 00:18:29.943 } 00:18:29.943 } 00:18:29.943 Got JSON-RPC error response 00:18:29.943 GoRPCClient: error on JSON-RPC call' 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/07/22 18:25:41 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode31758 serial_number:3kX h|UG~/mRZ*H\SJ\/Q], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN 3kX h|UG~/mRZ*H\SJ\/Q 00:18:29.943 request: 00:18:29.943 { 00:18:29.943 "method": "nvmf_create_subsystem", 00:18:29.943 "params": { 00:18:29.943 "nqn": "nqn.2016-06.io.spdk:cnode31758", 00:18:29.943 "serial_number": "3kX h|UG~/mRZ*H\\SJ\\/Q" 00:18:29.943 } 00:18:29.943 } 00:18:29.943 Got JSON-RPC error response 00:18:29.943 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:18:29.943 18:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:18:29.943 18:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:18:29.943 
18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:18:29.943 
18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:18:29.943 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.944 
18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.944 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:30.202 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:18:30.202 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:18:30.202 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:18:30.202 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:30.202 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:30.202 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:18:30.202 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:18:30.202 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:18:30.202 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:30.202 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:30.202 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:18:30.202 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:18:30.202 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:18:30.202 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:30.203 
18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:30.203 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:18:30.203 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:18:30.203 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:18:30.203 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:30.203 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:30.203 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:18:30.203 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:18:30.203 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:18:30.203 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:30.203 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:30.203 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:18:30.203 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:18:30.203 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:18:30.203 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:30.203 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:30.203 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:18:30.203 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:18:30.203 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:18:30.203 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:30.203 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:30.203 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:18:30.203 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:18:30.203 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:18:30.203 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:30.203 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:30.203 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:18:30.203 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:18:30.203 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:18:30.203 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:30.203 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:30.203 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:18:30.203 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:18:30.203 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 
00:18:30.203 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:30.203 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:30.203 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:18:30.203 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:18:30.203 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:18:30.203 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:30.203 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:30.203 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:18:30.203 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:18:30.203 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:18:30.203 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:30.203 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:30.203 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ~ == \- ]] 00:18:30.203 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '~ CQLS@8#zw!HD.02s.7J*Du,@W}6~N"l}h0_,7g`' 00:18:30.203 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '~ CQLS@8#zw!HD.02s.7J*Du,@W}6~N"l}h0_,7g`' nqn.2016-06.io.spdk:cnode9876 00:18:30.461 [2024-07-22 18:25:42.317591] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9876: invalid model number '~ CQLS@8#zw!HD.02s.7J*Du,@W}6~N"l}h0_,7g`' 00:18:30.461 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/07/22 18:25:42 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:~ CQLS@8#zw!HD.02s.7J*Du,@W}6~N"l}h0_,7g` nqn:nqn.2016-06.io.spdk:cnode9876], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN ~ CQLS@8#zw!HD.02s.7J*Du,@W}6~N"l}h0_,7g` 00:18:30.461 request: 00:18:30.461 { 00:18:30.461 "method": "nvmf_create_subsystem", 00:18:30.461 "params": { 00:18:30.461 "nqn": "nqn.2016-06.io.spdk:cnode9876", 00:18:30.461 "model_number": "~ CQLS@8#zw!HD.02s.7J*Du,@W}6~N\"l}h0_,7g`" 00:18:30.461 } 00:18:30.461 } 00:18:30.461 Got JSON-RPC error response 00:18:30.461 GoRPCClient: error on JSON-RPC call' 00:18:30.461 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/07/22 18:25:42 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:~ CQLS@8#zw!HD.02s.7J*Du,@W}6~N"l}h0_,7g` nqn:nqn.2016-06.io.spdk:cnode9876], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN ~ CQLS@8#zw!HD.02s.7J*Du,@W}6~N"l}h0_,7g` 00:18:30.461 request: 00:18:30.461 { 00:18:30.461 "method": "nvmf_create_subsystem", 00:18:30.461 "params": { 00:18:30.461 "nqn": "nqn.2016-06.io.spdk:cnode9876", 00:18:30.461 "model_number": "~ CQLS@8#zw!HD.02s.7J*Du,@W}6~N\"l}h0_,7g`" 00:18:30.461 } 00:18:30.461 } 00:18:30.461 Got JSON-RPC error response 00:18:30.461 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:30.461 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:18:30.719 [2024-07-22 18:25:42.610142] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:30.719 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:18:30.976 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:18:30.976 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:18:30.976 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:18:30.976 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:18:30.976 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:18:31.234 [2024-07-22 18:25:43.223900] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:18:31.234 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/07/22 18:25:43 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:18:31.234 request: 00:18:31.234 { 00:18:31.234 "method": "nvmf_subsystem_remove_listener", 00:18:31.234 "params": { 00:18:31.234 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:31.234 "listen_address": { 00:18:31.234 "trtype": "tcp", 00:18:31.234 "traddr": "", 00:18:31.234 "trsvcid": "4421" 00:18:31.234 } 00:18:31.234 } 00:18:31.234 } 00:18:31.234 Got JSON-RPC error response 00:18:31.234 GoRPCClient: error on JSON-RPC call' 00:18:31.234 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/07/22 18:25:43 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:18:31.234 request: 00:18:31.234 { 00:18:31.234 "method": "nvmf_subsystem_remove_listener", 00:18:31.234 "params": { 00:18:31.234 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:31.234 "listen_address": { 00:18:31.234 "trtype": "tcp", 00:18:31.234 "traddr": "", 00:18:31.234 "trsvcid": "4421" 00:18:31.234 } 00:18:31.234 } 00:18:31.234 } 00:18:31.234 Got JSON-RPC error response 00:18:31.234 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:18:31.490 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26243 -i 0 00:18:31.748 [2024-07-22 18:25:43.540367] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26243: invalid cntlid range [0-65519] 00:18:31.748 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/07/22 18:25:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode26243], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:18:31.748 request: 00:18:31.748 { 
00:18:31.748 "method": "nvmf_create_subsystem", 00:18:31.748 "params": { 00:18:31.748 "nqn": "nqn.2016-06.io.spdk:cnode26243", 00:18:31.748 "min_cntlid": 0 00:18:31.748 } 00:18:31.748 } 00:18:31.748 Got JSON-RPC error response 00:18:31.748 GoRPCClient: error on JSON-RPC call' 00:18:31.748 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/07/22 18:25:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode26243], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:18:31.748 request: 00:18:31.748 { 00:18:31.748 "method": "nvmf_create_subsystem", 00:18:31.748 "params": { 00:18:31.748 "nqn": "nqn.2016-06.io.spdk:cnode26243", 00:18:31.748 "min_cntlid": 0 00:18:31.748 } 00:18:31.748 } 00:18:31.748 Got JSON-RPC error response 00:18:31.748 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:31.748 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28702 -i 65520 00:18:32.006 [2024-07-22 18:25:43.852871] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28702: invalid cntlid range [65520-65519] 00:18:32.006 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/07/22 18:25:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode28702], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:18:32.006 request: 00:18:32.006 { 00:18:32.006 "method": "nvmf_create_subsystem", 00:18:32.006 "params": { 00:18:32.006 "nqn": "nqn.2016-06.io.spdk:cnode28702", 00:18:32.006 "min_cntlid": 65520 00:18:32.006 } 00:18:32.006 } 00:18:32.006 Got JSON-RPC error response 00:18:32.006 GoRPCClient: error on JSON-RPC call' 00:18:32.006 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/07/22 18:25:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode28702], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:18:32.006 request: 00:18:32.006 { 00:18:32.006 "method": "nvmf_create_subsystem", 00:18:32.006 "params": { 00:18:32.006 "nqn": "nqn.2016-06.io.spdk:cnode28702", 00:18:32.006 "min_cntlid": 65520 00:18:32.006 } 00:18:32.006 } 00:18:32.006 Got JSON-RPC error response 00:18:32.006 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:32.006 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2087 -I 0 00:18:32.265 [2024-07-22 18:25:44.185506] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2087: invalid cntlid range [1-0] 00:18:32.265 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/07/22 18:25:44 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode2087], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:18:32.265 request: 00:18:32.265 { 00:18:32.265 "method": "nvmf_create_subsystem", 00:18:32.265 "params": { 00:18:32.265 "nqn": 
"nqn.2016-06.io.spdk:cnode2087", 00:18:32.265 "max_cntlid": 0 00:18:32.265 } 00:18:32.265 } 00:18:32.265 Got JSON-RPC error response 00:18:32.265 GoRPCClient: error on JSON-RPC call' 00:18:32.265 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/07/22 18:25:44 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode2087], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:18:32.265 request: 00:18:32.265 { 00:18:32.265 "method": "nvmf_create_subsystem", 00:18:32.265 "params": { 00:18:32.265 "nqn": "nqn.2016-06.io.spdk:cnode2087", 00:18:32.265 "max_cntlid": 0 00:18:32.265 } 00:18:32.265 } 00:18:32.265 Got JSON-RPC error response 00:18:32.265 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:32.265 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5707 -I 65520 00:18:32.523 [2024-07-22 18:25:44.490032] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5707: invalid cntlid range [1-65520] 00:18:32.523 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/07/22 18:25:44 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode5707], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:18:32.523 request: 00:18:32.523 { 00:18:32.523 "method": "nvmf_create_subsystem", 00:18:32.523 "params": { 00:18:32.523 "nqn": "nqn.2016-06.io.spdk:cnode5707", 00:18:32.523 "max_cntlid": 65520 00:18:32.523 } 00:18:32.523 } 00:18:32.523 Got JSON-RPC error response 00:18:32.523 GoRPCClient: error on JSON-RPC call' 00:18:32.523 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/07/22 18:25:44 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode5707], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:18:32.523 request: 00:18:32.523 { 00:18:32.523 "method": "nvmf_create_subsystem", 00:18:32.523 "params": { 00:18:32.523 "nqn": "nqn.2016-06.io.spdk:cnode5707", 00:18:32.523 "max_cntlid": 65520 00:18:32.523 } 00:18:32.523 } 00:18:32.523 Got JSON-RPC error response 00:18:32.523 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:32.523 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8932 -i 6 -I 5 00:18:32.781 [2024-07-22 18:25:44.754544] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8932: invalid cntlid range [6-5] 00:18:32.781 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/07/22 18:25:44 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode8932], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:18:32.781 request: 00:18:32.781 { 00:18:32.781 "method": "nvmf_create_subsystem", 00:18:32.781 "params": { 00:18:32.781 "nqn": "nqn.2016-06.io.spdk:cnode8932", 00:18:32.781 "min_cntlid": 6, 00:18:32.781 "max_cntlid": 5 00:18:32.781 } 
00:18:32.781 } 00:18:32.781 Got JSON-RPC error response 00:18:32.781 GoRPCClient: error on JSON-RPC call' 00:18:32.781 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/07/22 18:25:44 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode8932], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:18:32.781 request: 00:18:32.781 { 00:18:32.781 "method": "nvmf_create_subsystem", 00:18:32.781 "params": { 00:18:32.781 "nqn": "nqn.2016-06.io.spdk:cnode8932", 00:18:32.781 "min_cntlid": 6, 00:18:32.781 "max_cntlid": 5 00:18:32.781 } 00:18:32.781 } 00:18:32.781 Got JSON-RPC error response 00:18:32.781 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:32.781 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:18:33.038 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:18:33.038 { 00:18:33.038 "name": "foobar", 00:18:33.038 "method": "nvmf_delete_target", 00:18:33.038 "req_id": 1 00:18:33.038 } 00:18:33.038 Got JSON-RPC error response 00:18:33.038 response: 00:18:33.038 { 00:18:33.038 "code": -32602, 00:18:33.038 "message": "The specified target doesn'\''t exist, cannot delete it." 00:18:33.038 }' 00:18:33.038 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:18:33.038 { 00:18:33.038 "name": "foobar", 00:18:33.038 "method": "nvmf_delete_target", 00:18:33.038 "req_id": 1 00:18:33.038 } 00:18:33.038 Got JSON-RPC error response 00:18:33.038 response: 00:18:33.038 { 00:18:33.038 "code": -32602, 00:18:33.038 "message": "The specified target doesn't exist, cannot delete it." 
00:18:33.038 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:18:33.038 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:18:33.038 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:18:33.038 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:33.038 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:18:33.038 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:33.038 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:18:33.038 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:33.038 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:33.038 rmmod nvme_tcp 00:18:33.038 rmmod nvme_fabrics 00:18:33.038 rmmod nvme_keyring 00:18:33.038 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:33.038 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:18:33.038 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:18:33.038 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 82212 ']' 00:18:33.038 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 82212 00:18:33.038 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 82212 ']' 00:18:33.038 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 82212 00:18:33.038 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:18:33.038 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:33.038 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82212 00:18:33.038 killing process with pid 82212 00:18:33.038 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:33.038 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:33.038 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82212' 00:18:33.038 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 82212 00:18:33.038 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 82212 00:18:34.413 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:34.413 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:34.413 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:34.413 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:34.413 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:34.413 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.413 18:25:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.413 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.413 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:34.413 ************************************ 00:18:34.413 END TEST nvmf_invalid 00:18:34.413 ************************************ 00:18:34.413 00:18:34.413 real 0m7.440s 00:18:34.413 user 0m27.752s 00:18:34.413 sys 0m1.701s 00:18:34.413 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:34.413 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:34.413 18:25:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:18:34.413 18:25:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:34.413 18:25:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:34.413 18:25:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:34.413 18:25:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:34.413 ************************************ 00:18:34.413 START TEST nvmf_connect_stress 00:18:34.413 ************************************ 00:18:34.413 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:34.672 * Looking for test storage... 00:18:34.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:34.672 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:34.672 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:18:34.672 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.672 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.672 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.672 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.672 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.672 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.672 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.672 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:34.673 Cannot find device "nvmf_tgt_br" 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:34.673 Cannot find device "nvmf_tgt_br2" 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:34.673 Cannot find device "nvmf_tgt_br" 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:34.673 Cannot find device "nvmf_tgt_br2" 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:34.673 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:18:34.673 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:34.673 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:34.932 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:34.932 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:34.932 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:34.932 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:34.932 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:34.932 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:34.932 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:34.932 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:34.932 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:34.932 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:34.932 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:34.932 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:34.932 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:34.932 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:34.932 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:34.932 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:34.932 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:34.932 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 
10.0.0.2 00:18:34.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:34.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:18:34.932 00:18:34.932 --- 10.0.0.2 ping statistics --- 00:18:34.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.932 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:18:34.932 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:34.932 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:34.932 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:18:34.932 00:18:34.933 --- 10.0.0.3 ping statistics --- 00:18:34.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.933 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:18:34.933 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:34.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:34.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:18:34.933 00:18:34.933 --- 10.0.0.1 ping statistics --- 00:18:34.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.933 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:18:34.933 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:34.933 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:18:34.933 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:34.933 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:34.933 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:34.933 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:34.933 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:34.933 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:34.933 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:34.933 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:34.933 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:34.933 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:34.933 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:34.933 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=82732 00:18:34.933 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 82732 00:18:34.933 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 82732 ']' 00:18:34.933 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:34.933 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.933 18:25:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:34.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.933 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.933 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:34.933 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:35.191 [2024-07-22 18:25:47.022614] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:18:35.191 [2024-07-22 18:25:47.022828] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.191 [2024-07-22 18:25:47.207352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:35.759 [2024-07-22 18:25:47.502113] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.759 [2024-07-22 18:25:47.502186] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:35.759 [2024-07-22 18:25:47.502203] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:35.759 [2024-07-22 18:25:47.502220] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:35.759 [2024-07-22 18:25:47.502231] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:35.759 [2024-07-22 18:25:47.502402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.759 [2024-07-22 18:25:47.502587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.759 [2024-07-22 18:25:47.502621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:36.017 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:36.017 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:18:36.017 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:36.017 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:36.017 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:36.017 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:36.017 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:36.017 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.017 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:36.017 [2024-07-22 18:25:47.966164] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:36.017 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.017 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:36.017 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.017 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:36.017 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.017 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:36.017 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.017 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:36.017 [2024-07-22 18:25:47.997186] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:36.017 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.017 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:36.017 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.017 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:36.017 NULL1 00:18:36.017 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.017 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@21 -- # PERF_PID=82784 00:18:36.017 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:18:36.017 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:36.017 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:18:36.017 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:36.017 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:36.017 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:36.017 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:36.017 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:36.018 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:36.018 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:36.018 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:36.018 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.276 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:36.603 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.603 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:36.603 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:36.603 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.603 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:36.887 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.887 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:36.887 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:36.887 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.887 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:18:37.146 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.146 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:37.146 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:37.146 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.146 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:37.404 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.404 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:37.404 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:37.404 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.404 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:37.970 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.970 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:37.970 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:37.970 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.970 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:38.228 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.228 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:38.228 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:38.228 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.228 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:38.486 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.486 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:38.486 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:38.486 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.486 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:38.745 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.745 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:38.745 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:38.745 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.745 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.312 18:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.312 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:39.312 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:39.312 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.312 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.570 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.570 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:39.570 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:39.570 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.570 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.828 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.828 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:39.828 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:39.828 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.828 18:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:40.087 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.087 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:40.087 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:40.087 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.087 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:40.345 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.345 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:40.345 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:40.345 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.345 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:40.918 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.918 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:40.918 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:40.918 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.918 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:41.176 18:25:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.176 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:41.176 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:41.176 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.176 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:41.446 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.446 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:41.446 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:41.446 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.446 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:41.708 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.708 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:41.708 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:41.708 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.708 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:41.967 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.967 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:41.967 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:41.967 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.967 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:42.534 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.534 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:42.534 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:42.534 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.534 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:42.792 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.792 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:42.792 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:42.792 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.792 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:43.052 18:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.052 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:43.052 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:43.052 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.052 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:43.322 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.323 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:43.323 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:43.323 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.323 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:43.889 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.889 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:43.889 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:43.889 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.889 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:44.147 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.147 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:44.147 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:44.147 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.147 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:44.406 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.406 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:44.406 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:44.406 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.406 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:44.663 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.663 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:44.663 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:44.663 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.664 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:44.921 18:25:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.921 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:44.921 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:44.921 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.921 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:45.488 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.488 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:45.488 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:45.488 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.488 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:45.747 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.747 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:45.747 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:45.747 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.747 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:46.005 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.005 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:46.005 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:46.005 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.005 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:46.263 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.263 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:46.263 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:46.263 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.263 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:46.521 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:46.777 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.777 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 82784 00:18:46.777 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (82784) - No such process 00:18:46.777 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 82784 00:18:46.777 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:18:46.777 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:46.777 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:46.777 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:46.777 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:18:46.777 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:46.777 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:18:46.777 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:46.777 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:46.777 rmmod nvme_tcp 00:18:46.777 rmmod nvme_fabrics 00:18:46.777 rmmod nvme_keyring 00:18:46.777 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:46.777 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:18:46.777 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:18:46.777 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 82732 ']' 00:18:46.777 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 82732 00:18:46.777 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 82732 ']' 00:18:46.777 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 82732 00:18:46.777 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:18:46.777 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:46.777 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82732 00:18:46.777 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:46.777 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:46.777 killing process with pid 82732 00:18:46.778 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82732' 00:18:46.778 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 82732 00:18:46.778 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 82732 00:18:48.151 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:48.151 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:48.151 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:48.151 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:48.151 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # 
remove_spdk_ns 00:18:48.151 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.151 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:48.151 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.151 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:48.151 00:18:48.151 real 0m13.611s 00:18:48.151 user 0m43.197s 00:18:48.151 sys 0m3.460s 00:18:48.151 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:48.151 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:48.151 ************************************ 00:18:48.151 END TEST nvmf_connect_stress 00:18:48.151 ************************************ 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:48.151 ************************************ 00:18:48.151 START TEST nvmf_fused_ordering 00:18:48.151 ************************************ 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:48.151 * Looking for test storage... 
00:18:48.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.151 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:48.152 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:48.410 Cannot find device "nvmf_tgt_br" 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:48.410 Cannot find device "nvmf_tgt_br2" 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:48.410 Cannot find device "nvmf_tgt_br" 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:48.410 Cannot find device "nvmf_tgt_br2" 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:48.410 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:48.410 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:48.410 
18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:48.410 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:48.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:48.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:18:48.671 00:18:48.671 --- 10.0.0.2 ping statistics --- 00:18:48.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.671 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:48.671 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:48.671 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:18:48.671 00:18:48.671 --- 10.0.0.3 ping statistics --- 00:18:48.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.671 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:48.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:48.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:18:48.671 00:18:48.671 --- 10.0.0.1 ping statistics --- 00:18:48.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.671 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=83126 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 83126 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 83126 ']' 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:48.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:48.671 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:48.671 [2024-07-22 18:26:00.627481] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
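Stripped of the xtrace prefixes, the nvmf_veth_init block above builds the following host/namespace topology; every command is taken verbatim from the trace, only the grouping comments are added. The host keeps 10.0.0.1 on nvmf_init_if as the initiator address, the target namespace nvmf_tgt_ns_spdk owns 10.0.0.2 (nvmf_tgt_if) and 10.0.0.3 (nvmf_tgt_if2), and everything is joined through the nvmf_br bridge.

# Target-side network namespace and the three veth pairs.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiator on the host, two target addresses inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring the links up on both sides.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers together and open TCP/4420 for the initiator.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity pings in both directions (the replies are shown above).
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The two target-side addresses give the tests a first and second listener address inside the namespace (NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP), while the host keeps 10.0.0.1 as NVMF_INITIATOR_IP.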
00:18:48.671 [2024-07-22 18:26:00.627684] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.929 [2024-07-22 18:26:00.805912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.187 [2024-07-22 18:26:01.044597] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.187 [2024-07-22 18:26:01.044683] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.187 [2024-07-22 18:26:01.044716] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.187 [2024-07-22 18:26:01.044731] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.187 [2024-07-22 18:26:01.044742] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:49.187 [2024-07-22 18:26:01.044790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:49.769 [2024-07-22 18:26:01.601582] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:49.769 
[2024-07-22 18:26:01.617751] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:49.769 NULL1 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.769 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:49.769 [2024-07-22 18:26:01.711555] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
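In plain form, the fused_ordering setup above mirrors the connect_stress bring-up: the target is started inside the namespace with core mask 0x2 (the reactor on core 1 in the notice above), a 1000 MiB null bdev with 512-byte blocks is exposed as namespace 1 of nqn.2016-06.io.spdk:cnode1, and the fused_ordering tool then connects over TCP. The command lines are taken from the trace; the backgrounding of nvmf_tgt and the grouping comments are added, and reading each fused_ordering(N) line below as a per-operation progress counter printed by the tool is an assumption, since the log does not explain the tool's output format.

# Start the target inside the test namespace (single reactor, core mask 0x2).
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

# Configure the TCP transport, subsystem, listener and backing namespace.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512   # the "Namespace ID: 1 size: 1GB" reported below
rpc_cmd bdev_wait_for_examine
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# Run the fused-ordering exerciser against the listener.
/home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'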
00:18:49.769 [2024-07-22 18:26:01.711691] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83176 ] 00:18:50.336 Attached to nqn.2016-06.io.spdk:cnode1 00:18:50.336 Namespace ID: 1 size: 1GB 00:18:50.336 fused_ordering(0) 00:18:50.336 fused_ordering(1) 00:18:50.336 fused_ordering(2) 00:18:50.336 fused_ordering(3) 00:18:50.336 fused_ordering(4) 00:18:50.336 fused_ordering(5) 00:18:50.336 fused_ordering(6) 00:18:50.336 fused_ordering(7) 00:18:50.336 fused_ordering(8) 00:18:50.336 fused_ordering(9) 00:18:50.336 fused_ordering(10) 00:18:50.336 fused_ordering(11) 00:18:50.336 fused_ordering(12) 00:18:50.336 fused_ordering(13) 00:18:50.336 fused_ordering(14) 00:18:50.336 fused_ordering(15) 00:18:50.336 fused_ordering(16) 00:18:50.336 fused_ordering(17) 00:18:50.336 fused_ordering(18) 00:18:50.336 fused_ordering(19) 00:18:50.336 fused_ordering(20) 00:18:50.336 fused_ordering(21) 00:18:50.336 fused_ordering(22) 00:18:50.336 fused_ordering(23) 00:18:50.336 fused_ordering(24) 00:18:50.336 fused_ordering(25) 00:18:50.336 fused_ordering(26) 00:18:50.336 fused_ordering(27) 00:18:50.336 fused_ordering(28) 00:18:50.336 fused_ordering(29) 00:18:50.336 fused_ordering(30) 00:18:50.336 fused_ordering(31) 00:18:50.336 fused_ordering(32) 00:18:50.336 fused_ordering(33) 00:18:50.336 fused_ordering(34) 00:18:50.336 fused_ordering(35) 00:18:50.336 fused_ordering(36) 00:18:50.336 fused_ordering(37) 00:18:50.336 fused_ordering(38) 00:18:50.336 fused_ordering(39) 00:18:50.336 fused_ordering(40) 00:18:50.336 fused_ordering(41) 00:18:50.336 fused_ordering(42) 00:18:50.336 fused_ordering(43) 00:18:50.336 fused_ordering(44) 00:18:50.336 fused_ordering(45) 00:18:50.336 fused_ordering(46) 00:18:50.336 fused_ordering(47) 00:18:50.336 fused_ordering(48) 00:18:50.336 fused_ordering(49) 00:18:50.336 fused_ordering(50) 00:18:50.336 fused_ordering(51) 00:18:50.336 fused_ordering(52) 00:18:50.336 fused_ordering(53) 00:18:50.336 fused_ordering(54) 00:18:50.336 fused_ordering(55) 00:18:50.336 fused_ordering(56) 00:18:50.336 fused_ordering(57) 00:18:50.336 fused_ordering(58) 00:18:50.336 fused_ordering(59) 00:18:50.337 fused_ordering(60) 00:18:50.337 fused_ordering(61) 00:18:50.337 fused_ordering(62) 00:18:50.337 fused_ordering(63) 00:18:50.337 fused_ordering(64) 00:18:50.337 fused_ordering(65) 00:18:50.337 fused_ordering(66) 00:18:50.337 fused_ordering(67) 00:18:50.337 fused_ordering(68) 00:18:50.337 fused_ordering(69) 00:18:50.337 fused_ordering(70) 00:18:50.337 fused_ordering(71) 00:18:50.337 fused_ordering(72) 00:18:50.337 fused_ordering(73) 00:18:50.337 fused_ordering(74) 00:18:50.337 fused_ordering(75) 00:18:50.337 fused_ordering(76) 00:18:50.337 fused_ordering(77) 00:18:50.337 fused_ordering(78) 00:18:50.337 fused_ordering(79) 00:18:50.337 fused_ordering(80) 00:18:50.337 fused_ordering(81) 00:18:50.337 fused_ordering(82) 00:18:50.337 fused_ordering(83) 00:18:50.337 fused_ordering(84) 00:18:50.337 fused_ordering(85) 00:18:50.337 fused_ordering(86) 00:18:50.337 fused_ordering(87) 00:18:50.337 fused_ordering(88) 00:18:50.337 fused_ordering(89) 00:18:50.337 fused_ordering(90) 00:18:50.337 fused_ordering(91) 00:18:50.337 fused_ordering(92) 00:18:50.337 fused_ordering(93) 00:18:50.337 fused_ordering(94) 00:18:50.337 fused_ordering(95) 00:18:50.337 fused_ordering(96) 00:18:50.337 fused_ordering(97) 00:18:50.337 
fused_ordering(98) 00:18:50.337 ... fused_ordering(957) 00:18:52.604 [860 intermediate fused_ordering entries, numbered 98 through 957, elided; the counter runs contiguously with timestamps advancing from 00:18:50.337 through 00:18:52.604]
fused_ordering(958) 00:18:52.604 fused_ordering(959) 00:18:52.604 fused_ordering(960) 00:18:52.604 fused_ordering(961) 00:18:52.604 fused_ordering(962) 00:18:52.604 fused_ordering(963) 00:18:52.604 fused_ordering(964) 00:18:52.604 fused_ordering(965) 00:18:52.604 fused_ordering(966) 00:18:52.604 fused_ordering(967) 00:18:52.604 fused_ordering(968) 00:18:52.604 fused_ordering(969) 00:18:52.604 fused_ordering(970) 00:18:52.604 fused_ordering(971) 00:18:52.604 fused_ordering(972) 00:18:52.604 fused_ordering(973) 00:18:52.604 fused_ordering(974) 00:18:52.604 fused_ordering(975) 00:18:52.604 fused_ordering(976) 00:18:52.604 fused_ordering(977) 00:18:52.604 fused_ordering(978) 00:18:52.604 fused_ordering(979) 00:18:52.604 fused_ordering(980) 00:18:52.604 fused_ordering(981) 00:18:52.604 fused_ordering(982) 00:18:52.604 fused_ordering(983) 00:18:52.604 fused_ordering(984) 00:18:52.604 fused_ordering(985) 00:18:52.604 fused_ordering(986) 00:18:52.604 fused_ordering(987) 00:18:52.604 fused_ordering(988) 00:18:52.604 fused_ordering(989) 00:18:52.604 fused_ordering(990) 00:18:52.604 fused_ordering(991) 00:18:52.604 fused_ordering(992) 00:18:52.604 fused_ordering(993) 00:18:52.604 fused_ordering(994) 00:18:52.604 fused_ordering(995) 00:18:52.604 fused_ordering(996) 00:18:52.604 fused_ordering(997) 00:18:52.604 fused_ordering(998) 00:18:52.604 fused_ordering(999) 00:18:52.604 fused_ordering(1000) 00:18:52.604 fused_ordering(1001) 00:18:52.604 fused_ordering(1002) 00:18:52.604 fused_ordering(1003) 00:18:52.604 fused_ordering(1004) 00:18:52.604 fused_ordering(1005) 00:18:52.604 fused_ordering(1006) 00:18:52.604 fused_ordering(1007) 00:18:52.604 fused_ordering(1008) 00:18:52.604 fused_ordering(1009) 00:18:52.604 fused_ordering(1010) 00:18:52.604 fused_ordering(1011) 00:18:52.604 fused_ordering(1012) 00:18:52.604 fused_ordering(1013) 00:18:52.604 fused_ordering(1014) 00:18:52.604 fused_ordering(1015) 00:18:52.604 fused_ordering(1016) 00:18:52.604 fused_ordering(1017) 00:18:52.604 fused_ordering(1018) 00:18:52.604 fused_ordering(1019) 00:18:52.604 fused_ordering(1020) 00:18:52.604 fused_ordering(1021) 00:18:52.604 fused_ordering(1022) 00:18:52.604 fused_ordering(1023) 00:18:52.604 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:52.604 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:52.604 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:52.604 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:18:52.872 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:52.872 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:18:52.872 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:52.872 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:52.872 rmmod nvme_tcp 00:18:52.872 rmmod nvme_fabrics 00:18:52.872 rmmod nvme_keyring 00:18:52.872 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:52.872 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:18:52.872 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:18:52.872 18:26:04 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 83126 ']' 00:18:52.872 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 83126 00:18:52.873 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 83126 ']' 00:18:52.873 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 83126 00:18:52.873 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:18:52.873 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:52.873 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83126 00:18:52.873 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:52.873 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:52.873 killing process with pid 83126 00:18:52.873 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83126' 00:18:52.873 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 83126 00:18:52.873 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 83126 00:18:54.274 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:54.274 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:54.274 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:54.274 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:54.274 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:54.274 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.274 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:54.274 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.274 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:54.274 00:18:54.274 real 0m5.909s 00:18:54.274 user 0m6.905s 00:18:54.274 sys 0m1.951s 00:18:54.274 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:54.274 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:54.274 ************************************ 00:18:54.274 END TEST nvmf_fused_ordering 00:18:54.274 ************************************ 00:18:54.274 18:26:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:18:54.274 18:26:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:54.274 18:26:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:54.274 18:26:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- 
# xtrace_disable 00:18:54.274 18:26:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:54.274 ************************************ 00:18:54.274 START TEST nvmf_ns_masking 00:18:54.274 ************************************ 00:18:54.274 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:54.274 * Looking for test storage... 00:18:54.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:54.274 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:54.274 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:18:54.274 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:54.274 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:54.274 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:54.274 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:54.274 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:54.274 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:54.274 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:54.274 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:54.274 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:54.274 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:54.274 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:18:54.274 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:18:54.274 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:54.274 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:54.274 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:54.274 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:54.274 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:54.274 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:54.274 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:54.274 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:54.275 18:26:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=fbb01710-df97-452a-a920-3d89d3453854 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=5002875f-3def-41e7-9585-28e00caec000 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=319fe9fe-6c67-4070-aa87-5aa0e5fcadc1 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 
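The identifiers set up above (ns1uuid, ns2uuid, SUBSYSNQN, the two host NQNs and HOSTID) are what the rest of the masking test keys on. A minimal sketch of producing the same kind of identifiers outside the harness, assuming uuidgen and nvme-cli are available, would look like:

  # Sketch only; mirrors the variables the harness generates above.
  SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN1=nqn.2016-06.io.spdk:host1
  HOSTNQN2=nqn.2016-06.io.spdk:host2
  ns1uuid=$(uuidgen)   # UUID used for namespace 1
  ns2uuid=$(uuidgen)   # UUID used for namespace 2
  HOSTID=$(uuidgen)    # host identifier later passed to 'nvme connect -I'
  nvme gen-hostnqn     # nvme-cli can also derive a UUID-based host NQN directly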
00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:54.275 Cannot find device "nvmf_tgt_br" 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:54.275 Cannot find device "nvmf_tgt_br2" 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:54.275 Cannot find device "nvmf_tgt_br" 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:54.275 Cannot find device "nvmf_tgt_br2" 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:54.275 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:54.275 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:54.275 
18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:54.275 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:54.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:54.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:18:54.535 00:18:54.535 --- 10.0.0.2 ping statistics --- 00:18:54.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.535 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:54.535 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:54.535 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:18:54.535 00:18:54.535 --- 10.0.0.3 ping statistics --- 00:18:54.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.535 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:54.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:54.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:18:54.535 00:18:54.535 --- 10.0.0.1 ping statistics --- 00:18:54.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.535 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=83408 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 83408 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 83408 ']' 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:54.535 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:54.794 [2024-07-22 18:26:06.620401] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:18:54.794 [2024-07-22 18:26:06.620547] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:54.794 [2024-07-22 18:26:06.793190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.361 [2024-07-22 18:26:07.089397] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.361 [2024-07-22 18:26:07.089677] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.361 [2024-07-22 18:26:07.089706] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.361 [2024-07-22 18:26:07.089723] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.361 [2024-07-22 18:26:07.089735] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:55.361 [2024-07-22 18:26:07.089788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.619 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:55.619 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:18:55.619 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:55.619 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:55.619 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:55.619 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.619 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:55.878 [2024-07-22 18:26:07.892041] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:56.136 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:56.136 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:56.136 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:56.394 Malloc1 00:18:56.394 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:56.652 Malloc2 00:18:56.652 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:56.911 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:57.169 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:57.427 [2024-07-22 18:26:09.385189] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:57.427 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:57.427 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 319fe9fe-6c67-4070-aa87-5aa0e5fcadc1 -a 10.0.0.2 -s 4420 -i 4 00:18:57.685 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:57.685 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:57.685 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:57.685 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:57.685 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:59.586 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:59.586 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:59.586 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:59.586 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:59.586 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:59.586 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:59.586 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:59.586 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:59.586 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:59.846 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:59.846 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:59.846 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:59.846 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:59.846 [ 0]:0x1 00:18:59.846 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:59.846 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
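The ns_is_visible check above boils down to two nvme-cli calls: list the active namespaces on the connected controller, then read back the NGUID of namespace 1 and treat an all-zero value as "not visible". A condensed sketch of that check, assuming the controller enumerated as /dev/nvme0 as it did in this run:

  # Sketch of the visibility check driven by target/ns_masking.sh; /dev/nvme0 is an assumption.
  nvme list-ns /dev/nvme0 | grep 0x1                          # namespace 1 is listed only when visible
  nguid=$(nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid)
  [[ $nguid != "00000000000000000000000000000000" ]] && echo "namespace 1 is visible"
  # In the run above the visible namespace reports nguid=3121857d4aa64deab48f2cfab55e1c84.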
00:18:59.846 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3121857d4aa64deab48f2cfab55e1c84 00:18:59.846 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3121857d4aa64deab48f2cfab55e1c84 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:59.846 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:19:00.104 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:19:00.104 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:00.104 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:00.104 [ 0]:0x1 00:19:00.104 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:00.104 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:00.104 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3121857d4aa64deab48f2cfab55e1c84 00:19:00.104 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3121857d4aa64deab48f2cfab55e1c84 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:00.104 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:19:00.104 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:00.104 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:00.104 [ 1]:0x2 00:19:00.104 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:00.104 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:00.104 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=da2bad70f0714c2186ae39b1effa53a0 00:19:00.104 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ da2bad70f0714c2186ae39b1effa53a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:00.104 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:19:00.104 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:00.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:00.361 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:00.619 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:19:00.877 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:19:00.877 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 319fe9fe-6c67-4070-aa87-5aa0e5fcadc1 -a 10.0.0.2 -s 4420 -i 4 00:19:00.877 18:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:19:00.877 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:19:00.877 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:00.877 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:19:00.877 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:19:00.877 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:03.405 [ 0]:0x2 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:03.405 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:03.405 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=da2bad70f0714c2186ae39b1effa53a0 00:19:03.405 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ da2bad70f0714c2186ae39b1effa53a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:03.405 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:03.405 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:19:03.405 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:03.405 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:03.405 [ 0]:0x1 00:19:03.405 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:03.405 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:03.664 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3121857d4aa64deab48f2cfab55e1c84 00:19:03.664 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3121857d4aa64deab48f2cfab55e1c84 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:03.664 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:19:03.664 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:03.664 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:03.664 [ 1]:0x2 00:19:03.664 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:03.664 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:03.664 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=da2bad70f0714c2186ae39b1effa53a0 00:19:03.664 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ da2bad70f0714c2186ae39b1effa53a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:03.664 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:03.922 [ 0]:0x2 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=da2bad70f0714c2186ae39b1effa53a0 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@45 -- # [[ da2bad70f0714c2186ae39b1effa53a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:19:03.922 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:04.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:04.180 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:04.438 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:19:04.438 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 319fe9fe-6c67-4070-aa87-5aa0e5fcadc1 -a 10.0.0.2 -s 4420 -i 4 00:19:04.438 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:04.438 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:19:04.438 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:04.438 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:19:04.438 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:19:04.438 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:19:06.338 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:06.338 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:06.338 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:06.658 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:19:06.658 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:06.658 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:19:06.658 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:06.658 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:06.658 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:06.658 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:06.658 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:19:06.658 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:06.658 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:06.658 [ 0]:0x1 00:19:06.658 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o 
json 00:19:06.658 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:06.658 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3121857d4aa64deab48f2cfab55e1c84 00:19:06.658 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3121857d4aa64deab48f2cfab55e1c84 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:06.658 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:19:06.658 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:06.658 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:06.658 [ 1]:0x2 00:19:06.658 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:06.658 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:06.658 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=da2bad70f0714c2186ae39b1effa53a0 00:19:06.658 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ da2bad70f0714c2186ae39b1effa53a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:06.658 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:06.916 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:19:06.916 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:19:06.916 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:19:06.916 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:19:06.916 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:06.916 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:19:06.916 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:06.916 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:19:06.916 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:06.916 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:06.916 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:06.916 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:06.916 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:06.916 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:06.916 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 
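The visibility checks the test keeps repeating come down to one small helper. The following is a sketch of that logic built from the commands logged above, assuming the controller enumerates as /dev/nvme0 as it does in this run.

# A namespace counts as visible when it appears in list-ns and reports a non-zero NGUID.
ns_is_visible() {
    local nsid=$1
    nvme list-ns /dev/nvme0 | grep "$nsid" || return 1
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}
ns_is_visible 0x1   # succeeds only while nsid 1 is exposed to nqn.2016-06.io.spdk:host1
ns_is_visible 0x2   # nsid 2 was added without --no-auto-visible and stays visible throughout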
00:19:06.916 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:06.916 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:06.916 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:06.916 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:19:06.916 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:06.916 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:06.916 [ 0]:0x2 00:19:06.916 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:06.916 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:07.174 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=da2bad70f0714c2186ae39b1effa53a0 00:19:07.174 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ da2bad70f0714c2186ae39b1effa53a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:07.174 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:07.174 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:19:07.174 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:07.174 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:07.175 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:07.175 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:07.175 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:07.175 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:07.175 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:07.175 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:07.175 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:07.175 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:07.433 [2024-07-22 18:26:19.231629] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:19:07.433 2024/07/22 18:26:19 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 
nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:19:07.433 request: 00:19:07.433 { 00:19:07.433 "method": "nvmf_ns_remove_host", 00:19:07.433 "params": { 00:19:07.433 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.433 "nsid": 2, 00:19:07.433 "host": "nqn.2016-06.io.spdk:host1" 00:19:07.433 } 00:19:07.433 } 00:19:07.433 Got JSON-RPC error response 00:19:07.433 GoRPCClient: error on JSON-RPC call 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # 
grep 0x2 00:19:07.433 [ 0]:0x2 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=da2bad70f0714c2186ae39b1effa53a0 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ da2bad70f0714c2186ae39b1effa53a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:07.433 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=83786 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 83786 /var/tmp/host.sock 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 83786 ']' 00:19:07.433 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:19:07.434 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:07.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:07.434 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:07.434 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:07.434 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:07.692 [2024-07-22 18:26:19.541539] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
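From here the test adds a second SPDK process that plays the host role. A sketch of that pattern with the values from this run: the process gets its own RPC socket so the two instances can be driven independently, and core mask 2 keeps its reactor off the target's core.

# host-side SPDK instance with a dedicated RPC socket
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
hostpid=$!
# once it listens on /var/tmp/host.sock, drive it with rpc.py -s, e.g. attach to the target:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0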
00:19:07.692 [2024-07-22 18:26:19.541714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83786 ] 00:19:07.950 [2024-07-22 18:26:19.711590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.208 [2024-07-22 18:26:19.981208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.774 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:08.774 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:19:08.774 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:09.033 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:09.599 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid fbb01710-df97-452a-a920-3d89d3453854 00:19:09.599 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:19:09.599 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g FBB01710DF97452AA9203D89D3453854 -i 00:19:09.599 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 5002875f-3def-41e7-9585-28e00caec000 00:19:09.599 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:19:09.857 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 5002875F3DEF41E7958528E00CAEC000 -i 00:19:10.117 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:10.375 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:19:10.633 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:10.633 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:10.890 nvme0n1 00:19:10.890 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:10.890 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 
-s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:11.148 nvme1n2 00:19:11.148 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:19:11.148 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:19:11.148 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:19:11.148 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:19:11.148 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:11.415 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:19:11.415 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:19:11.415 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:19:11.415 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:19:11.674 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ fbb01710-df97-452a-a920-3d89d3453854 == \f\b\b\0\1\7\1\0\-\d\f\9\7\-\4\5\2\a\-\a\9\2\0\-\3\d\8\9\d\3\4\5\3\8\5\4 ]] 00:19:11.674 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:19:11.674 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:19:11.674 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:19:12.239 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 5002875f-3def-41e7-9585-28e00caec000 == \5\0\0\2\8\7\5\f\-\3\d\e\f\-\4\1\e\7\-\9\5\8\5\-\2\8\e\0\0\c\a\e\c\0\0\0 ]] 00:19:12.240 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 83786 00:19:12.240 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 83786 ']' 00:19:12.240 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 83786 00:19:12.240 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:19:12.240 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:12.240 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83786 00:19:12.240 killing process with pid 83786 00:19:12.240 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:12.240 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:12.240 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83786' 00:19:12.240 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 83786 00:19:12.240 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # 
wait 83786 00:19:14.784 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:14.784 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:19:14.784 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:19:14.784 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:14.784 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:19:14.784 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:14.784 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:19:14.784 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:14.784 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:14.784 rmmod nvme_tcp 00:19:15.042 rmmod nvme_fabrics 00:19:15.042 rmmod nvme_keyring 00:19:15.042 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:15.042 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:19:15.042 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:19:15.042 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 83408 ']' 00:19:15.042 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 83408 00:19:15.042 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 83408 ']' 00:19:15.042 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 83408 00:19:15.042 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:19:15.042 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:15.042 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83408 00:19:15.042 killing process with pid 83408 00:19:15.042 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:15.042 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:15.042 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83408' 00:19:15.042 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 83408 00:19:15.042 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 83408 00:19:16.458 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:16.458 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:16.458 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:16.458 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:16.458 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 
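Cleanup then runs in reverse order. A condensed sketch of the teardown shown above, with the pids and interface name taken from this run (the script's killprocess helper additionally waits for each pid to exit):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
kill 83786                        # host-side spdk_tgt started for the masking test
$rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
modprobe -v -r nvme-tcp           # rmmod output above shows nvme_tcp, nvme_fabrics, nvme_keyring going away
modprobe -v -r nvme-fabrics
kill 83408                        # the nvmf_tgt process itself
ip -4 addr flush nvmf_init_if     # drop the test addresses from the nvmf_init_if interface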
00:19:16.458 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.458 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:16.458 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.458 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:16.718 00:19:16.718 real 0m22.478s 00:19:16.718 user 0m35.642s 00:19:16.718 sys 0m3.204s 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:16.718 ************************************ 00:19:16.718 END TEST nvmf_ns_masking 00:19:16.718 ************************************ 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:16.718 ************************************ 00:19:16.718 START TEST nvmf_vfio_user 00:19:16.718 ************************************ 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:19:16.718 * Looking for test storage... 
00:19:16.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:19:16.718 Process pid: 84076 00:19:16.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=84076 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 84076' 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 84076 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 84076 ']' 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:16.718 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:16.977 [2024-07-22 18:26:28.797679] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:16.977 [2024-07-22 18:26:28.798427] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.977 [2024-07-22 18:26:28.982272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:17.543 [2024-07-22 18:26:29.281048] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.543 [2024-07-22 18:26:29.281343] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:17.543 [2024-07-22 18:26:29.281507] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:17.543 [2024-07-22 18:26:29.281575] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
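For the vfio-user test the target is set up again, but with a VFIOUSER transport whose listener address is a local directory instead of an IP and port. A sketch of the per-device loop the script runs in the RPCs that follow, with paths and NQNs as logged:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc_py nvmf_create_transport -t VFIOUSER
for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i      # the vfio-user device file (cntrl) is created here
    $rpc_py bdev_malloc_create 64 512 -b Malloc$i
    $rpc_py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc_py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc_py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done
# a local initiator then addresses the controller by path, as the identify step below does:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'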
00:19:17.543 [2024-07-22 18:26:29.281693] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:17.543 [2024-07-22 18:26:29.282020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.543 [2024-07-22 18:26:29.282790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.543 [2024-07-22 18:26:29.282975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.543 [2024-07-22 18:26:29.282985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:17.802 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:17.802 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:19:17.802 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:18.736 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:19:19.303 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:19.303 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:19.303 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:19.303 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:19.303 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:19.567 Malloc1 00:19:19.567 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:19.835 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:20.093 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:20.351 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:20.351 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:20.351 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:20.609 Malloc2 00:19:20.609 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:20.867 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:21.432 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER 
-a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:21.432 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:19:21.432 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:19:21.432 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:21.432 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:21.432 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:19:21.433 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:21.691 [2024-07-22 18:26:33.488927] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:21.691 [2024-07-22 18:26:33.489075] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84218 ] 00:19:21.691 [2024-07-22 18:26:33.667429] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:19:21.691 [2024-07-22 18:26:33.670741] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:21.691 [2024-07-22 18:26:33.670786] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7effb61c3000 00:19:21.691 [2024-07-22 18:26:33.671676] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:21.691 [2024-07-22 18:26:33.672676] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:21.691 [2024-07-22 18:26:33.673684] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:21.691 [2024-07-22 18:26:33.674704] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:21.691 [2024-07-22 18:26:33.677913] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:21.691 [2024-07-22 18:26:33.678697] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:21.691 [2024-07-22 18:26:33.679712] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:21.691 [2024-07-22 18:26:33.680735] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:21.691 [2024-07-22 18:26:33.681739] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:21.691 [2024-07-22 18:26:33.681777] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse 
region 0, Size 0xb000, Offset 0x1000, Map addr 0x7effb61b8000 00:19:21.691 [2024-07-22 18:26:33.683383] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:21.691 [2024-07-22 18:26:33.703562] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:19:21.691 [2024-07-22 18:26:33.703675] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:19:21.691 [2024-07-22 18:26:33.705934] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:21.691 [2024-07-22 18:26:33.706105] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:21.691 [2024-07-22 18:26:33.706816] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:19:21.691 [2024-07-22 18:26:33.706906] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:19:21.691 [2024-07-22 18:26:33.706926] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:19:21.951 [2024-07-22 18:26:33.707913] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:19:21.951 [2024-07-22 18:26:33.707965] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:19:21.951 [2024-07-22 18:26:33.707990] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:19:21.951 [2024-07-22 18:26:33.708893] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:21.951 [2024-07-22 18:26:33.708933] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:19:21.951 [2024-07-22 18:26:33.708954] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:19:21.951 [2024-07-22 18:26:33.709897] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:19:21.951 [2024-07-22 18:26:33.709926] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:21.951 [2024-07-22 18:26:33.710911] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:19:21.951 [2024-07-22 18:26:33.710948] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:19:21.951 [2024-07-22 18:26:33.710965] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:19:21.951 [2024-07-22 18:26:33.710986] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:21.951 [2024-07-22 18:26:33.711101] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:19:21.951 [2024-07-22 18:26:33.711118] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:21.951 [2024-07-22 18:26:33.711132] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:19:21.951 [2024-07-22 18:26:33.711934] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:19:21.951 [2024-07-22 18:26:33.712893] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:19:21.951 [2024-07-22 18:26:33.713906] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:21.951 [2024-07-22 18:26:33.714913] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:21.951 [2024-07-22 18:26:33.715083] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:21.951 [2024-07-22 18:26:33.715951] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:19:21.951 [2024-07-22 18:26:33.716004] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:21.951 [2024-07-22 18:26:33.716018] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:19:21.951 [2024-07-22 18:26:33.716053] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:19:21.951 [2024-07-22 18:26:33.716075] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:19:21.951 [2024-07-22 18:26:33.716111] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:21.951 [2024-07-22 18:26:33.716127] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:21.951 [2024-07-22 18:26:33.716141] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:21.951 [2024-07-22 18:26:33.716167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:21.951 [2024-07-22 18:26:33.716255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:21.951 [2024-07-22 18:26:33.716285] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:19:21.951 [2024-07-22 18:26:33.716299] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 
131072 00:19:21.951 [2024-07-22 18:26:33.716309] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:19:21.951 [2024-07-22 18:26:33.716321] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:21.951 [2024-07-22 18:26:33.716334] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:19:21.951 [2024-07-22 18:26:33.716346] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:19:21.951 [2024-07-22 18:26:33.716356] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:19:21.951 [2024-07-22 18:26:33.716380] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:19:21.951 [2024-07-22 18:26:33.716400] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:21.951 [2024-07-22 18:26:33.716429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:21.951 [2024-07-22 18:26:33.716467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.951 [2024-07-22 18:26:33.716492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.951 [2024-07-22 18:26:33.716507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.951 [2024-07-22 18:26:33.716525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.951 [2024-07-22 18:26:33.716542] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:21.951 [2024-07-22 18:26:33.716561] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:21.951 [2024-07-22 18:26:33.716577] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:21.951 [2024-07-22 18:26:33.716601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:21.951 [2024-07-22 18:26:33.716613] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:19:21.951 [2024-07-22 18:26:33.716626] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:21.951 [2024-07-22 18:26:33.716639] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:19:21.951 [2024-07-22 18:26:33.716655] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to 
wait for set number of queues (timeout 30000 ms) 00:19:21.951 [2024-07-22 18:26:33.716671] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:21.951 [2024-07-22 18:26:33.716698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:21.951 [2024-07-22 18:26:33.716807] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:19:21.951 [2024-07-22 18:26:33.716885] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:21.951 [2024-07-22 18:26:33.716908] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:21.951 [2024-07-22 18:26:33.716925] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:21.951 [2024-07-22 18:26:33.716934] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:21.951 [2024-07-22 18:26:33.716950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:21.951 [2024-07-22 18:26:33.716975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:21.951 [2024-07-22 18:26:33.717019] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:19:21.951 [2024-07-22 18:26:33.717042] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:19:21.951 [2024-07-22 18:26:33.717070] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:19:21.951 [2024-07-22 18:26:33.717088] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:21.951 [2024-07-22 18:26:33.717101] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:21.951 [2024-07-22 18:26:33.717114] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:21.951 [2024-07-22 18:26:33.717132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:21.951 [2024-07-22 18:26:33.717166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:21.951 [2024-07-22 18:26:33.717223] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:21.951 [2024-07-22 18:26:33.717246] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:21.951 [2024-07-22 18:26:33.717267] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:21.951 [2024-07-22 18:26:33.717277] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:21.951 [2024-07-22 
18:26:33.717287] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:21.951 [2024-07-22 18:26:33.717303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:21.952 [2024-07-22 18:26:33.717324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:21.952 [2024-07-22 18:26:33.717373] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:21.952 [2024-07-22 18:26:33.717390] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:19:21.952 [2024-07-22 18:26:33.717405] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:19:21.952 [2024-07-22 18:26:33.717424] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:19:21.952 [2024-07-22 18:26:33.717437] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:21.952 [2024-07-22 18:26:33.717453] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:19:21.952 [2024-07-22 18:26:33.717463] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:19:21.952 [2024-07-22 18:26:33.717475] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:19:21.952 [2024-07-22 18:26:33.717485] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:19:21.952 [2024-07-22 18:26:33.717536] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:21.952 [2024-07-22 18:26:33.717554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:21.952 [2024-07-22 18:26:33.717578] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:21.952 [2024-07-22 18:26:33.717592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:21.952 [2024-07-22 18:26:33.717613] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:21.952 [2024-07-22 18:26:33.717625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:21.952 [2024-07-22 18:26:33.717649] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:21.952 [2024-07-22 18:26:33.717662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:21.952 [2024-07-22 18:26:33.717697] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:21.952 [2024-07-22 18:26:33.717708] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:21.952 [2024-07-22 18:26:33.717719] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:21.952 [2024-07-22 18:26:33.717726] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:21.952 [2024-07-22 18:26:33.717740] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:21.952 [2024-07-22 18:26:33.717752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:21.952 [2024-07-22 18:26:33.717771] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:21.952 [2024-07-22 18:26:33.717781] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:21.952 [2024-07-22 18:26:33.717793] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:21.952 [2024-07-22 18:26:33.717804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:21.952 [2024-07-22 18:26:33.717830] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:21.952 [2024-07-22 18:26:33.717840] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:21.952 [2024-07-22 18:26:33.717882] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:21.952 [2024-07-22 18:26:33.717914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:21.952 [2024-07-22 18:26:33.717941] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:21.952 [2024-07-22 18:26:33.717951] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:21.952 [2024-07-22 18:26:33.717961] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:21.952 [2024-07-22 18:26:33.717976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:21.952 [2024-07-22 18:26:33.717994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:21.952 [2024-07-22 18:26:33.718038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:21.952 [2024-07-22 18:26:33.718061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:21.952 [2024-07-22 18:26:33.718076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:21.952 ===================================================== 00:19:21.952 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:21.952 
===================================================== 00:19:21.952 Controller Capabilities/Features 00:19:21.952 ================================ 00:19:21.952 Vendor ID: 4e58 00:19:21.952 Subsystem Vendor ID: 4e58 00:19:21.952 Serial Number: SPDK1 00:19:21.952 Model Number: SPDK bdev Controller 00:19:21.952 Firmware Version: 24.09 00:19:21.952 Recommended Arb Burst: 6 00:19:21.952 IEEE OUI Identifier: 8d 6b 50 00:19:21.952 Multi-path I/O 00:19:21.952 May have multiple subsystem ports: Yes 00:19:21.952 May have multiple controllers: Yes 00:19:21.952 Associated with SR-IOV VF: No 00:19:21.952 Max Data Transfer Size: 131072 00:19:21.952 Max Number of Namespaces: 32 00:19:21.952 Max Number of I/O Queues: 127 00:19:21.952 NVMe Specification Version (VS): 1.3 00:19:21.952 NVMe Specification Version (Identify): 1.3 00:19:21.952 Maximum Queue Entries: 256 00:19:21.952 Contiguous Queues Required: Yes 00:19:21.952 Arbitration Mechanisms Supported 00:19:21.952 Weighted Round Robin: Not Supported 00:19:21.952 Vendor Specific: Not Supported 00:19:21.952 Reset Timeout: 15000 ms 00:19:21.952 Doorbell Stride: 4 bytes 00:19:21.952 NVM Subsystem Reset: Not Supported 00:19:21.952 Command Sets Supported 00:19:21.952 NVM Command Set: Supported 00:19:21.952 Boot Partition: Not Supported 00:19:21.952 Memory Page Size Minimum: 4096 bytes 00:19:21.952 Memory Page Size Maximum: 4096 bytes 00:19:21.952 Persistent Memory Region: Not Supported 00:19:21.952 Optional Asynchronous Events Supported 00:19:21.952 Namespace Attribute Notices: Supported 00:19:21.952 Firmware Activation Notices: Not Supported 00:19:21.952 ANA Change Notices: Not Supported 00:19:21.952 PLE Aggregate Log Change Notices: Not Supported 00:19:21.952 LBA Status Info Alert Notices: Not Supported 00:19:21.952 EGE Aggregate Log Change Notices: Not Supported 00:19:21.952 Normal NVM Subsystem Shutdown event: Not Supported 00:19:21.952 Zone Descriptor Change Notices: Not Supported 00:19:21.952 Discovery Log Change Notices: Not Supported 00:19:21.952 Controller Attributes 00:19:21.952 128-bit Host Identifier: Supported 00:19:21.952 Non-Operational Permissive Mode: Not Supported 00:19:21.952 NVM Sets: Not Supported 00:19:21.952 Read Recovery Levels: Not Supported 00:19:21.952 Endurance Groups: Not Supported 00:19:21.952 Predictable Latency Mode: Not Supported 00:19:21.952 Traffic Based Keep ALive: Not Supported 00:19:21.952 Namespace Granularity: Not Supported 00:19:21.952 SQ Associations: Not Supported 00:19:21.952 UUID List: Not Supported 00:19:21.952 Multi-Domain Subsystem: Not Supported 00:19:21.952 Fixed Capacity Management: Not Supported 00:19:21.952 Variable Capacity Management: Not Supported 00:19:21.952 Delete Endurance Group: Not Supported 00:19:21.952 Delete NVM Set: Not Supported 00:19:21.952 Extended LBA Formats Supported: Not Supported 00:19:21.952 Flexible Data Placement Supported: Not Supported 00:19:21.952 00:19:21.952 Controller Memory Buffer Support 00:19:21.952 ================================ 00:19:21.952 Supported: No 00:19:21.952 00:19:21.952 Persistent Memory Region Support 00:19:21.952 ================================ 00:19:21.952 Supported: No 00:19:21.952 00:19:21.952 Admin Command Set Attributes 00:19:21.952 ============================ 00:19:21.952 Security Send/Receive: Not Supported 00:19:21.952 Format NVM: Not Supported 00:19:21.952 Firmware Activate/Download: Not Supported 00:19:21.952 Namespace Management: Not Supported 00:19:21.952 Device Self-Test: Not Supported 00:19:21.952 Directives: Not Supported 00:19:21.952 NVMe-MI: 
Not Supported 00:19:21.952 Virtualization Management: Not Supported 00:19:21.952 Doorbell Buffer Config: Not Supported 00:19:21.952 Get LBA Status Capability: Not Supported 00:19:21.952 Command & Feature Lockdown Capability: Not Supported 00:19:21.952 Abort Command Limit: 4 00:19:21.952 Async Event Request Limit: 4 00:19:21.952 Number of Firmware Slots: N/A 00:19:21.952 Firmware Slot 1 Read-Only: N/A 00:19:21.952 Firmware Activation Without Reset: N/A 00:19:21.952 Multiple Update Detection Support: N/A 00:19:21.952 Firmware Update Granularity: No Information Provided 00:19:21.952 Per-Namespace SMART Log: No 00:19:21.952 Asymmetric Namespace Access Log Page: Not Supported 00:19:21.952 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:19:21.952 Command Effects Log Page: Supported 00:19:21.953 Get Log Page Extended Data: Supported 00:19:21.953 Telemetry Log Pages: Not Supported 00:19:21.953 Persistent Event Log Pages: Not Supported 00:19:21.953 Supported Log Pages Log Page: May Support 00:19:21.953 Commands Supported & Effects Log Page: Not Supported 00:19:21.953 Feature Identifiers & Effects Log Page:May Support 00:19:21.953 NVMe-MI Commands & Effects Log Page: May Support 00:19:21.953 Data Area 4 for Telemetry Log: Not Supported 00:19:21.953 Error Log Page Entries Supported: 128 00:19:21.953 Keep Alive: Supported 00:19:21.953 Keep Alive Granularity: 10000 ms 00:19:21.953 00:19:21.953 NVM Command Set Attributes 00:19:21.953 ========================== 00:19:21.953 Submission Queue Entry Size 00:19:21.953 Max: 64 00:19:21.953 Min: 64 00:19:21.953 Completion Queue Entry Size 00:19:21.953 Max: 16 00:19:21.953 Min: 16 00:19:21.953 Number of Namespaces: 32 00:19:21.953 Compare Command: Supported 00:19:21.953 Write Uncorrectable Command: Not Supported 00:19:21.953 Dataset Management Command: Supported 00:19:21.953 Write Zeroes Command: Supported 00:19:21.953 Set Features Save Field: Not Supported 00:19:21.953 Reservations: Not Supported 00:19:21.953 Timestamp: Not Supported 00:19:21.953 Copy: Supported 00:19:21.953 Volatile Write Cache: Present 00:19:21.953 Atomic Write Unit (Normal): 1 00:19:21.953 Atomic Write Unit (PFail): 1 00:19:21.953 Atomic Compare & Write Unit: 1 00:19:21.953 Fused Compare & Write: Supported 00:19:21.953 Scatter-Gather List 00:19:21.953 SGL Command Set: Supported (Dword aligned) 00:19:21.953 SGL Keyed: Not Supported 00:19:21.953 SGL Bit Bucket Descriptor: Not Supported 00:19:21.953 SGL Metadata Pointer: Not Supported 00:19:21.953 Oversized SGL: Not Supported 00:19:21.953 SGL Metadata Address: Not Supported 00:19:21.953 SGL Offset: Not Supported 00:19:21.953 Transport SGL Data Block: Not Supported 00:19:21.953 Replay Protected Memory Block: Not Supported 00:19:21.953 00:19:21.953 Firmware Slot Information 00:19:21.953 ========================= 00:19:21.953 Active slot: 1 00:19:21.953 Slot 1 Firmware Revision: 24.09 00:19:21.953 00:19:21.953 00:19:21.953 Commands Supported and Effects 00:19:21.953 ============================== 00:19:21.953 Admin Commands 00:19:21.953 -------------- 00:19:21.953 Get Log Page (02h): Supported 00:19:21.953 Identify (06h): Supported 00:19:21.953 Abort (08h): Supported 00:19:21.953 Set Features (09h): Supported 00:19:21.953 Get Features (0Ah): Supported 00:19:21.953 Asynchronous Event Request (0Ch): Supported 00:19:21.953 Keep Alive (18h): Supported 00:19:21.953 I/O Commands 00:19:21.953 ------------ 00:19:21.953 Flush (00h): Supported LBA-Change 00:19:21.953 Write (01h): Supported LBA-Change 00:19:21.953 Read (02h): Supported 00:19:21.953 Compare 
(05h): Supported 00:19:21.953 Write Zeroes (08h): Supported LBA-Change 00:19:21.953 Dataset Management (09h): Supported LBA-Change 00:19:21.953 Copy (19h): Supported LBA-Change 00:19:21.953 00:19:21.953 Error Log 00:19:21.953 ========= 00:19:21.953 00:19:21.953 Arbitration 00:19:21.953 =========== 00:19:21.953 Arbitration Burst: 1 00:19:21.953 00:19:21.953 Power Management 00:19:21.953 ================ 00:19:21.953 Number of Power States: 1 00:19:21.953 Current Power State: Power State #0 00:19:21.953 Power State #0: 00:19:21.953 Max Power: 0.00 W 00:19:21.953 Non-Operational State: Operational 00:19:21.953 Entry Latency: Not Reported 00:19:21.953 Exit Latency: Not Reported 00:19:21.953 Relative Read Throughput: 0 00:19:21.953 Relative Read Latency: 0 00:19:21.953 Relative Write Throughput: 0 00:19:21.953 Relative Write Latency: 0 00:19:21.953 Idle Power: Not Reported 00:19:21.953 Active Power: Not Reported 00:19:21.953 Non-Operational Permissive Mode: Not Supported 00:19:21.953 00:19:21.953 Health Information 00:19:21.953 ================== 00:19:21.953 Critical Warnings: 00:19:21.953 Available Spare Space: OK 00:19:21.953 Temperature: OK 00:19:21.953 Device Reliability: OK 00:19:21.953 Read Only: No 00:19:21.953 Volatile Memory Backup: OK 00:19:21.953 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:21.953 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:21.953 Available Spare: 0% 00:19:21.953 Available Sp[2024-07-22 18:26:33.718324] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:21.953 [2024-07-22 18:26:33.718349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:21.953 [2024-07-22 18:26:33.718458] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:19:21.953 [2024-07-22 18:26:33.718492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.953 [2024-07-22 18:26:33.718509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.953 [2024-07-22 18:26:33.718522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.953 [2024-07-22 18:26:33.718556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.953 [2024-07-22 18:26:33.721917] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:21.953 [2024-07-22 18:26:33.721960] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:19:21.953 [2024-07-22 18:26:33.722988] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:21.953 [2024-07-22 18:26:33.723112] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:19:21.953 [2024-07-22 18:26:33.723136] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:19:21.953 [2024-07-22 18:26:33.723990] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:19:21.953 [2024-07-22 18:26:33.724041] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:19:21.953 [2024-07-22 18:26:33.724801] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:19:21.953 [2024-07-22 18:26:33.731894] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:21.953 are Threshold: 0% 00:19:21.953 Life Percentage Used: 0% 00:19:21.953 Data Units Read: 0 00:19:21.953 Data Units Written: 0 00:19:21.953 Host Read Commands: 0 00:19:21.953 Host Write Commands: 0 00:19:21.953 Controller Busy Time: 0 minutes 00:19:21.953 Power Cycles: 0 00:19:21.953 Power On Hours: 0 hours 00:19:21.953 Unsafe Shutdowns: 0 00:19:21.953 Unrecoverable Media Errors: 0 00:19:21.953 Lifetime Error Log Entries: 0 00:19:21.953 Warning Temperature Time: 0 minutes 00:19:21.953 Critical Temperature Time: 0 minutes 00:19:21.953 00:19:21.953 Number of Queues 00:19:21.953 ================ 00:19:21.953 Number of I/O Submission Queues: 127 00:19:21.953 Number of I/O Completion Queues: 127 00:19:21.953 00:19:21.953 Active Namespaces 00:19:21.953 ================= 00:19:21.953 Namespace ID:1 00:19:21.953 Error Recovery Timeout: Unlimited 00:19:21.953 Command Set Identifier: NVM (00h) 00:19:21.953 Deallocate: Supported 00:19:21.953 Deallocated/Unwritten Error: Not Supported 00:19:21.953 Deallocated Read Value: Unknown 00:19:21.953 Deallocate in Write Zeroes: Not Supported 00:19:21.953 Deallocated Guard Field: 0xFFFF 00:19:21.953 Flush: Supported 00:19:21.953 Reservation: Supported 00:19:21.953 Namespace Sharing Capabilities: Multiple Controllers 00:19:21.953 Size (in LBAs): 131072 (0GiB) 00:19:21.953 Capacity (in LBAs): 131072 (0GiB) 00:19:21.953 Utilization (in LBAs): 131072 (0GiB) 00:19:21.953 NGUID: F30C087608A84ACB8C0C9C7CA865FAE4 00:19:21.953 UUID: f30c0876-08a8-4acb-8c0c-9c7ca865fae4 00:19:21.953 Thin Provisioning: Not Supported 00:19:21.953 Per-NS Atomic Units: Yes 00:19:21.953 Atomic Boundary Size (Normal): 0 00:19:21.953 Atomic Boundary Size (PFail): 0 00:19:21.953 Atomic Boundary Offset: 0 00:19:21.953 Maximum Single Source Range Length: 65535 00:19:21.953 Maximum Copy Length: 65535 00:19:21.953 Maximum Source Range Count: 1 00:19:21.953 NGUID/EUI64 Never Reused: No 00:19:21.953 Namespace Write Protected: No 00:19:21.953 Number of LBA Formats: 1 00:19:21.953 Current LBA Format: LBA Format #00 00:19:21.953 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:21.953 00:19:21.953 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:22.213 [2024-07-22 18:26:34.204300] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:27.481 Initializing NVMe Controllers 00:19:27.481 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:27.481 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:27.481 Initialization complete. Launching workers. 
00:19:27.481 ======================================================== 00:19:27.481 Latency(us) 00:19:27.481 Device Information : IOPS MiB/s Average min max 00:19:27.481 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 24886.64 97.21 5143.28 1411.80 12724.76 00:19:27.481 ======================================================== 00:19:27.481 Total : 24886.64 97.21 5143.28 1411.80 12724.76 00:19:27.481 00:19:27.481 [2024-07-22 18:26:39.231036] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:27.481 18:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:28.048 [2024-07-22 18:26:39.768650] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:33.307 Initializing NVMe Controllers 00:19:33.307 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:33.307 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:33.307 Initialization complete. Launching workers. 00:19:33.307 ======================================================== 00:19:33.307 Latency(us) 00:19:33.307 Device Information : IOPS MiB/s Average min max 00:19:33.307 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 14543.43 56.81 8806.49 5273.84 22739.46 00:19:33.307 ======================================================== 00:19:33.307 Total : 14543.43 56.81 8806.49 5273.84 22739.46 00:19:33.307 00:19:33.307 [2024-07-22 18:26:44.797807] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:33.307 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:33.307 [2024-07-22 18:26:45.234555] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:38.568 [2024-07-22 18:26:50.297955] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:38.568 Initializing NVMe Controllers 00:19:38.568 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:38.568 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:38.568 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:19:38.569 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:19:38.569 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:19:38.569 Initialization complete. Launching workers. 
00:19:38.569 Starting thread on core 2 00:19:38.569 Starting thread on core 3 00:19:38.569 Starting thread on core 1 00:19:38.569 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:19:39.135 [2024-07-22 18:26:50.861786] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:42.421 [2024-07-22 18:26:54.038705] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:42.421 Initializing NVMe Controllers 00:19:42.421 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:42.421 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:42.421 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:19:42.421 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:19:42.421 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:19:42.421 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:19:42.421 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:19:42.422 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:42.422 Initialization complete. Launching workers. 00:19:42.422 Starting thread on core 1 with urgent priority queue 00:19:42.422 Starting thread on core 2 with urgent priority queue 00:19:42.422 Starting thread on core 3 with urgent priority queue 00:19:42.422 Starting thread on core 0 with urgent priority queue 00:19:42.422 SPDK bdev Controller (SPDK1 ) core 0: 469.33 IO/s 213.07 secs/100000 ios 00:19:42.422 SPDK bdev Controller (SPDK1 ) core 1: 512.00 IO/s 195.31 secs/100000 ios 00:19:42.422 SPDK bdev Controller (SPDK1 ) core 2: 832.00 IO/s 120.19 secs/100000 ios 00:19:42.422 SPDK bdev Controller (SPDK1 ) core 3: 768.00 IO/s 130.21 secs/100000 ios 00:19:42.422 ======================================================== 00:19:42.422 00:19:42.422 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:42.680 [2024-07-22 18:26:54.558054] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:42.680 Initializing NVMe Controllers 00:19:42.680 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:42.680 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:42.680 Namespace ID: 1 size: 0GB 00:19:42.680 Initialization complete. 00:19:42.680 INFO: using host memory buffer for IO 00:19:42.680 Hello world! 
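[Editor's aside] The host-side examples exercised above (spdk_nvme_identify, spdk_nvme_perf read and write, reconnect, arbitration, hello_world) all reach the target through the same VFIOUSER transport ID string. Purely as a reading aid, their invocations from the trace are restated below with that string factored into a shell variable; the variable name is an editorial convenience, while the flags and values are copied from the log.

    # Transport ID used by every host-side tool in this trace (factored out for readability).
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
    SPDK=/home/vagrant/spdk_repo/spdk

    $SPDK/build/bin/spdk_nvme_identify -r "$TRID" -g -L nvme -L nvme_vfio -L vfio_pci
    $SPDK/build/bin/spdk_nvme_perf   -r "$TRID" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2
    $SPDK/build/bin/spdk_nvme_perf   -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
    $SPDK/build/examples/reconnect   -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
    $SPDK/build/examples/arbitration -t 3 -r "$TRID" -d 256 -g
    $SPDK/build/examples/hello_world -d 256 -g -r "$TRID"

The overhead test that follows in the log addresses the same transport ID string.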
00:19:42.680 [2024-07-22 18:26:54.593233] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:42.939 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:43.197 [2024-07-22 18:26:55.051476] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:44.135 Initializing NVMe Controllers 00:19:44.135 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:44.135 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:44.135 Initialization complete. Launching workers. 00:19:44.135 submit (in ns) avg, min, max = 9802.6, 3767.3, 7001496.4 00:19:44.135 complete (in ns) avg, min, max = 37751.3, 2277.3, 7069748.2 00:19:44.135 00:19:44.135 Submit histogram 00:19:44.135 ================ 00:19:44.135 Range in us Cumulative Count 00:19:44.135 3.753 - 3.782: 0.0216% ( 2) 00:19:44.135 3.782 - 3.811: 0.0324% ( 1) 00:19:44.135 3.811 - 3.840: 0.0433% ( 1) 00:19:44.135 3.840 - 3.869: 0.1189% ( 7) 00:19:44.135 3.869 - 3.898: 0.2271% ( 10) 00:19:44.135 3.898 - 3.927: 0.3568% ( 12) 00:19:44.135 3.927 - 3.956: 0.6704% ( 29) 00:19:44.135 3.956 - 3.985: 1.4598% ( 73) 00:19:44.135 3.985 - 4.015: 2.0329% ( 53) 00:19:44.135 4.015 - 4.044: 2.7898% ( 70) 00:19:44.135 4.044 - 4.073: 4.5307% ( 161) 00:19:44.135 4.073 - 4.102: 7.3313% ( 259) 00:19:44.135 4.102 - 4.131: 9.4939% ( 200) 00:19:44.135 4.131 - 4.160: 11.2997% ( 167) 00:19:44.135 4.160 - 4.189: 13.7543% ( 227) 00:19:44.135 4.189 - 4.218: 16.8144% ( 283) 00:19:44.135 4.218 - 4.247: 19.4204% ( 241) 00:19:44.135 4.247 - 4.276: 21.6587% ( 207) 00:19:44.135 4.276 - 4.305: 25.7353% ( 377) 00:19:44.135 4.305 - 4.335: 30.4066% ( 432) 00:19:44.135 4.335 - 4.364: 34.2344% ( 354) 00:19:44.135 4.364 - 4.393: 38.2569% ( 372) 00:19:44.135 4.393 - 4.422: 45.8910% ( 706) 00:19:44.135 4.422 - 4.451: 54.6172% ( 807) 00:19:44.135 4.451 - 4.480: 61.8296% ( 667) 00:19:44.135 4.480 - 4.509: 67.4308% ( 518) 00:19:44.135 4.509 - 4.538: 72.3183% ( 452) 00:19:44.135 4.538 - 4.567: 75.9191% ( 333) 00:19:44.135 4.567 - 4.596: 78.6332% ( 251) 00:19:44.135 4.596 - 4.625: 80.6985% ( 191) 00:19:44.135 4.625 - 4.655: 82.7098% ( 186) 00:19:44.135 4.655 - 4.684: 84.7210% ( 186) 00:19:44.135 4.684 - 4.713: 86.0943% ( 127) 00:19:44.135 4.713 - 4.742: 87.4027% ( 121) 00:19:44.135 4.742 - 4.771: 88.6462% ( 115) 00:19:44.135 4.771 - 4.800: 89.7275% ( 100) 00:19:44.135 4.800 - 4.829: 90.7548% ( 95) 00:19:44.135 4.829 - 4.858: 91.4576% ( 65) 00:19:44.135 4.858 - 4.887: 92.1388% ( 63) 00:19:44.135 4.887 - 4.916: 92.6795% ( 50) 00:19:44.135 4.916 - 4.945: 93.1553% ( 44) 00:19:44.135 4.945 - 4.975: 93.4905% ( 31) 00:19:44.135 4.975 - 5.004: 93.6743% ( 17) 00:19:44.135 5.004 - 5.033: 93.8581% ( 17) 00:19:44.135 5.033 - 5.062: 94.0636% ( 19) 00:19:44.135 5.062 - 5.091: 94.1393% ( 7) 00:19:44.135 5.091 - 5.120: 94.1825% ( 4) 00:19:44.135 5.120 - 5.149: 94.2690% ( 8) 00:19:44.135 5.149 - 5.178: 94.2907% ( 2) 00:19:44.135 5.178 - 5.207: 94.3123% ( 2) 00:19:44.135 5.207 - 5.236: 94.3339% ( 2) 00:19:44.135 5.236 - 5.265: 94.3880% ( 5) 00:19:44.135 5.265 - 5.295: 94.3988% ( 1) 00:19:44.135 5.324 - 5.353: 94.4204% ( 2) 00:19:44.135 5.469 - 5.498: 94.4312% ( 1) 00:19:44.135 5.498 - 5.527: 94.4529% ( 2) 00:19:44.135 5.527 - 5.556: 94.4637% ( 1) 00:19:44.135 5.556 - 5.585: 94.4745% ( 1) 
00:19:44.135 5.585 - 5.615: 94.4853% ( 1) 00:19:44.135 5.673 - 5.702: 94.4961% ( 1) 00:19:44.135 5.731 - 5.760: 94.5069% ( 1) 00:19:44.135 5.789 - 5.818: 94.5177% ( 1) 00:19:44.135 5.818 - 5.847: 94.5285% ( 1) 00:19:44.135 5.876 - 5.905: 94.5502% ( 2) 00:19:44.135 5.905 - 5.935: 94.5718% ( 2) 00:19:44.135 5.935 - 5.964: 94.5826% ( 1) 00:19:44.135 5.964 - 5.993: 94.6042% ( 2) 00:19:44.135 5.993 - 6.022: 94.6259% ( 2) 00:19:44.135 6.022 - 6.051: 94.6583% ( 3) 00:19:44.135 6.051 - 6.080: 94.6799% ( 2) 00:19:44.135 6.080 - 6.109: 94.6907% ( 1) 00:19:44.135 6.109 - 6.138: 94.7124% ( 2) 00:19:44.135 6.138 - 6.167: 94.7232% ( 1) 00:19:44.135 6.167 - 6.196: 94.7989% ( 7) 00:19:44.135 6.196 - 6.225: 94.8421% ( 4) 00:19:44.135 6.225 - 6.255: 94.8746% ( 3) 00:19:44.135 6.255 - 6.284: 94.9503% ( 7) 00:19:44.135 6.284 - 6.313: 95.0260% ( 7) 00:19:44.135 6.313 - 6.342: 95.0800% ( 5) 00:19:44.135 6.342 - 6.371: 95.1016% ( 2) 00:19:44.135 6.400 - 6.429: 95.1233% ( 2) 00:19:44.135 6.429 - 6.458: 95.1773% ( 5) 00:19:44.135 6.458 - 6.487: 95.2206% ( 4) 00:19:44.135 6.487 - 6.516: 95.2638% ( 4) 00:19:44.135 6.516 - 6.545: 95.3071% ( 4) 00:19:44.135 6.545 - 6.575: 95.3612% ( 5) 00:19:44.135 6.575 - 6.604: 95.4152% ( 5) 00:19:44.135 6.604 - 6.633: 95.4369% ( 2) 00:19:44.135 6.633 - 6.662: 95.4693% ( 3) 00:19:44.135 6.691 - 6.720: 95.4801% ( 1) 00:19:44.135 6.720 - 6.749: 95.5125% ( 3) 00:19:44.135 6.778 - 6.807: 95.5342% ( 2) 00:19:44.135 6.807 - 6.836: 95.5666% ( 3) 00:19:44.135 6.836 - 6.865: 95.5774% ( 1) 00:19:44.135 6.865 - 6.895: 95.5990% ( 2) 00:19:44.135 6.895 - 6.924: 95.6099% ( 1) 00:19:44.135 6.924 - 6.953: 95.6207% ( 1) 00:19:44.135 6.953 - 6.982: 95.6315% ( 1) 00:19:44.135 7.127 - 7.156: 95.6531% ( 2) 00:19:44.135 7.156 - 7.185: 95.6639% ( 1) 00:19:44.135 7.185 - 7.215: 95.6964% ( 3) 00:19:44.135 7.215 - 7.244: 95.7180% ( 2) 00:19:44.136 7.244 - 7.273: 95.7612% ( 4) 00:19:44.136 7.273 - 7.302: 95.7721% ( 1) 00:19:44.136 7.331 - 7.360: 95.8045% ( 3) 00:19:44.136 7.360 - 7.389: 95.8153% ( 1) 00:19:44.136 7.389 - 7.418: 95.8261% ( 1) 00:19:44.136 7.418 - 7.447: 95.8478% ( 2) 00:19:44.136 7.447 - 7.505: 95.8802% ( 3) 00:19:44.136 7.505 - 7.564: 95.9234% ( 4) 00:19:44.136 7.564 - 7.622: 95.9451% ( 2) 00:19:44.136 7.622 - 7.680: 95.9775% ( 3) 00:19:44.136 7.680 - 7.738: 96.0640% ( 8) 00:19:44.136 7.738 - 7.796: 96.0856% ( 2) 00:19:44.136 7.796 - 7.855: 96.1073% ( 2) 00:19:44.136 7.855 - 7.913: 96.1505% ( 4) 00:19:44.136 7.913 - 7.971: 96.2046% ( 5) 00:19:44.136 7.971 - 8.029: 96.2803% ( 7) 00:19:44.136 8.029 - 8.087: 96.3668% ( 8) 00:19:44.136 8.087 - 8.145: 96.4208% ( 5) 00:19:44.136 8.145 - 8.204: 96.5182% ( 9) 00:19:44.136 8.204 - 8.262: 96.5398% ( 2) 00:19:44.136 8.262 - 8.320: 96.5614% ( 2) 00:19:44.136 8.320 - 8.378: 96.5939% ( 3) 00:19:44.136 8.495 - 8.553: 96.6047% ( 1) 00:19:44.136 8.553 - 8.611: 96.6371% ( 3) 00:19:44.136 8.611 - 8.669: 96.6696% ( 3) 00:19:44.136 8.669 - 8.727: 96.6804% ( 1) 00:19:44.136 8.727 - 8.785: 96.7020% ( 2) 00:19:44.136 8.844 - 8.902: 96.7344% ( 3) 00:19:44.136 8.960 - 9.018: 96.7561% ( 2) 00:19:44.136 9.018 - 9.076: 96.7777% ( 2) 00:19:44.136 9.076 - 9.135: 96.8317% ( 5) 00:19:44.136 9.135 - 9.193: 96.8426% ( 1) 00:19:44.136 9.193 - 9.251: 96.8750% ( 3) 00:19:44.136 9.251 - 9.309: 96.9074% ( 3) 00:19:44.136 9.309 - 9.367: 96.9615% ( 5) 00:19:44.136 9.367 - 9.425: 96.9723% ( 1) 00:19:44.136 9.425 - 9.484: 96.9939% ( 2) 00:19:44.136 9.484 - 9.542: 97.0588% ( 6) 00:19:44.136 9.542 - 9.600: 97.0804% ( 2) 00:19:44.136 9.600 - 9.658: 97.0913% ( 1) 00:19:44.136 9.658 - 
9.716: 97.1453% ( 5) 00:19:44.136 9.716 - 9.775: 97.1561% ( 1) 00:19:44.136 9.775 - 9.833: 97.1886% ( 3) 00:19:44.136 9.833 - 9.891: 97.1994% ( 1) 00:19:44.136 9.891 - 9.949: 97.2210% ( 2) 00:19:44.136 9.949 - 10.007: 97.2426% ( 2) 00:19:44.136 10.007 - 10.065: 97.2643% ( 2) 00:19:44.136 10.065 - 10.124: 97.2859% ( 2) 00:19:44.136 10.182 - 10.240: 97.3075% ( 2) 00:19:44.136 10.240 - 10.298: 97.3183% ( 1) 00:19:44.136 10.298 - 10.356: 97.3292% ( 1) 00:19:44.136 10.356 - 10.415: 97.4048% ( 7) 00:19:44.136 10.415 - 10.473: 97.4265% ( 2) 00:19:44.136 10.473 - 10.531: 97.4481% ( 2) 00:19:44.136 10.531 - 10.589: 97.4589% ( 1) 00:19:44.136 10.589 - 10.647: 97.4805% ( 2) 00:19:44.136 10.647 - 10.705: 97.4913% ( 1) 00:19:44.136 10.822 - 10.880: 97.5130% ( 2) 00:19:44.136 10.880 - 10.938: 97.5238% ( 1) 00:19:44.136 10.996 - 11.055: 97.5454% ( 2) 00:19:44.136 11.055 - 11.113: 97.5562% ( 1) 00:19:44.136 11.113 - 11.171: 97.5779% ( 2) 00:19:44.136 11.171 - 11.229: 97.5995% ( 2) 00:19:44.136 11.229 - 11.287: 97.6211% ( 2) 00:19:44.136 11.287 - 11.345: 97.6535% ( 3) 00:19:44.136 11.345 - 11.404: 97.6644% ( 1) 00:19:44.136 11.404 - 11.462: 97.6860% ( 2) 00:19:44.136 11.462 - 11.520: 97.6968% ( 1) 00:19:44.136 11.520 - 11.578: 97.7076% ( 1) 00:19:44.136 11.578 - 11.636: 97.7184% ( 1) 00:19:44.136 11.636 - 11.695: 97.7292% ( 1) 00:19:44.136 11.695 - 11.753: 97.7401% ( 1) 00:19:44.136 11.811 - 11.869: 97.7509% ( 1) 00:19:44.136 11.869 - 11.927: 97.7725% ( 2) 00:19:44.136 11.927 - 11.985: 97.8049% ( 3) 00:19:44.136 11.985 - 12.044: 97.8266% ( 2) 00:19:44.136 12.044 - 12.102: 97.8374% ( 1) 00:19:44.136 12.218 - 12.276: 97.8590% ( 2) 00:19:44.136 12.276 - 12.335: 97.8698% ( 1) 00:19:44.136 12.451 - 12.509: 97.8806% ( 1) 00:19:44.136 12.567 - 12.625: 97.8914% ( 1) 00:19:44.136 12.625 - 12.684: 97.9131% ( 2) 00:19:44.136 12.800 - 12.858: 97.9239% ( 1) 00:19:44.136 12.858 - 12.916: 97.9347% ( 1) 00:19:44.136 12.975 - 13.033: 97.9563% ( 2) 00:19:44.136 13.091 - 13.149: 97.9779% ( 2) 00:19:44.136 13.149 - 13.207: 97.9996% ( 2) 00:19:44.136 13.207 - 13.265: 98.0104% ( 1) 00:19:44.136 13.265 - 13.324: 98.0320% ( 2) 00:19:44.136 13.324 - 13.382: 98.0536% ( 2) 00:19:44.136 13.382 - 13.440: 98.0861% ( 3) 00:19:44.136 13.440 - 13.498: 98.0969% ( 1) 00:19:44.136 13.615 - 13.673: 98.1077% ( 1) 00:19:44.136 13.673 - 13.731: 98.1510% ( 4) 00:19:44.136 13.731 - 13.789: 98.1834% ( 3) 00:19:44.136 13.789 - 13.847: 98.2050% ( 2) 00:19:44.136 13.847 - 13.905: 98.2266% ( 2) 00:19:44.136 13.905 - 13.964: 98.2375% ( 1) 00:19:44.136 13.964 - 14.022: 98.2591% ( 2) 00:19:44.136 14.080 - 14.138: 98.2699% ( 1) 00:19:44.136 14.138 - 14.196: 98.2915% ( 2) 00:19:44.136 14.196 - 14.255: 98.3023% ( 1) 00:19:44.136 14.429 - 14.487: 98.3240% ( 2) 00:19:44.136 14.545 - 14.604: 98.3348% ( 1) 00:19:44.136 14.604 - 14.662: 98.3564% ( 2) 00:19:44.136 14.662 - 14.720: 98.3780% ( 2) 00:19:44.136 14.720 - 14.778: 98.3888% ( 1) 00:19:44.136 15.011 - 15.127: 98.4213% ( 3) 00:19:44.136 15.127 - 15.244: 98.4321% ( 1) 00:19:44.136 15.244 - 15.360: 98.4429% ( 1) 00:19:44.136 15.360 - 15.476: 98.4753% ( 3) 00:19:44.136 15.593 - 15.709: 98.5078% ( 3) 00:19:44.136 15.709 - 15.825: 98.5510% ( 4) 00:19:44.136 15.942 - 16.058: 98.5835% ( 3) 00:19:44.136 16.175 - 16.291: 98.6375% ( 5) 00:19:44.136 16.291 - 16.407: 98.6808% ( 4) 00:19:44.136 16.524 - 16.640: 98.7132% ( 3) 00:19:44.136 16.640 - 16.756: 98.7240% ( 1) 00:19:44.136 16.756 - 16.873: 98.7349% ( 1) 00:19:44.136 16.989 - 17.105: 98.7457% ( 1) 00:19:44.136 17.105 - 17.222: 98.7565% ( 1) 00:19:44.136 17.222 - 
17.338: 98.7781% ( 2) 00:19:44.136 17.455 - 17.571: 98.7889% ( 1) 00:19:44.136 17.687 - 17.804: 98.7997% ( 1) 00:19:44.136 17.804 - 17.920: 98.8106% ( 1) 00:19:44.136 18.036 - 18.153: 98.8214% ( 1) 00:19:44.136 18.153 - 18.269: 98.8322% ( 1) 00:19:44.136 18.385 - 18.502: 98.8430% ( 1) 00:19:44.136 18.502 - 18.618: 98.8538% ( 1) 00:19:44.136 18.618 - 18.735: 98.8646% ( 1) 00:19:44.136 18.735 - 18.851: 98.8971% ( 3) 00:19:44.136 18.851 - 18.967: 99.0052% ( 10) 00:19:44.136 18.967 - 19.084: 99.0484% ( 4) 00:19:44.136 19.084 - 19.200: 99.1349% ( 8) 00:19:44.136 19.200 - 19.316: 99.1782% ( 4) 00:19:44.136 19.316 - 19.433: 99.2539% ( 7) 00:19:44.136 19.433 - 19.549: 99.3080% ( 5) 00:19:44.136 19.549 - 19.665: 99.3296% ( 2) 00:19:44.136 19.665 - 19.782: 99.3728% ( 4) 00:19:44.136 19.782 - 19.898: 99.3945% ( 2) 00:19:44.136 19.898 - 20.015: 99.4269% ( 3) 00:19:44.136 20.015 - 20.131: 99.4810% ( 5) 00:19:44.136 20.131 - 20.247: 99.5242% ( 4) 00:19:44.136 20.247 - 20.364: 99.5783% ( 5) 00:19:44.136 20.364 - 20.480: 99.6432% ( 6) 00:19:44.136 20.713 - 20.829: 99.6648% ( 2) 00:19:44.136 20.829 - 20.945: 99.6756% ( 1) 00:19:44.136 20.945 - 21.062: 99.6864% ( 1) 00:19:44.136 21.062 - 21.178: 99.7080% ( 2) 00:19:44.136 21.178 - 21.295: 99.7297% ( 2) 00:19:44.136 21.411 - 21.527: 99.7621% ( 3) 00:19:44.136 21.876 - 21.993: 99.7729% ( 1) 00:19:44.136 25.949 - 26.065: 99.7837% ( 1) 00:19:44.136 27.229 - 27.345: 99.7946% ( 1) 00:19:44.136 27.695 - 27.811: 99.8054% ( 1) 00:19:44.136 27.927 - 28.044: 99.8162% ( 1) 00:19:44.136 28.044 - 28.160: 99.8270% ( 1) 00:19:44.136 28.742 - 28.858: 99.8378% ( 1) 00:19:44.136 28.975 - 29.091: 99.8486% ( 1) 00:19:44.136 29.207 - 29.324: 99.8594% ( 1) 00:19:44.136 29.673 - 29.789: 99.8702% ( 1) 00:19:44.136 32.116 - 32.349: 99.8811% ( 1) 00:19:44.136 3068.276 - 3083.171: 99.8919% ( 1) 00:19:44.136 3872.582 - 3902.371: 99.9027% ( 1) 00:19:44.136 3961.949 - 3991.738: 99.9243% ( 2) 00:19:44.136 3991.738 - 4021.527: 99.9567% ( 3) 00:19:44.136 4021.527 - 4051.316: 99.9784% ( 2) 00:19:44.136 4081.105 - 4110.895: 99.9892% ( 1) 00:19:44.136 7000.436 - 7030.225: 100.0000% ( 1) 00:19:44.136 00:19:44.136 Complete histogram 00:19:44.136 ================== 00:19:44.136 Range in us Cumulative Count 00:19:44.136 2.269 - 2.284: 0.0324% ( 3) 00:19:44.136 2.284 - 2.298: 0.0649% ( 3) 00:19:44.136 2.298 - 2.313: 0.1081% ( 4) 00:19:44.136 2.313 - 2.327: 0.1406% ( 3) 00:19:44.136 2.327 - 2.342: 0.1730% ( 3) 00:19:44.136 2.342 - 2.356: 0.2703% ( 9) 00:19:44.136 2.356 - 2.371: 0.7353% ( 43) 00:19:44.136 2.371 - 2.385: 1.0813% ( 32) 00:19:44.136 2.385 - 2.400: 1.1029% ( 2) 00:19:44.136 2.400 - 2.415: 1.5030% ( 37) 00:19:44.136 2.415 - 2.429: 2.9196% ( 131) 00:19:44.136 2.429 - 2.444: 7.2772% ( 403) 00:19:44.136 2.444 - 2.458: 9.2236% ( 180) 00:19:44.136 2.458 - 2.473: 10.1644% ( 87) 00:19:44.137 2.473 - 2.487: 10.9862% ( 76) 00:19:44.137 2.487 - 2.502: 13.1055% ( 196) 00:19:44.137 2.502 - 2.516: 16.4576% ( 310) 00:19:44.137 2.516 - 2.531: 19.1285% ( 247) 00:19:44.137 2.531 - 2.545: 20.2206% ( 101) 00:19:44.137 2.545 - 2.560: 20.7937% ( 53) 00:19:44.137 2.560 - 2.575: 22.5887% ( 166) 00:19:44.137 2.575 - 2.589: 28.0385% ( 504) 00:19:44.137 2.589 - 2.604: 33.0774% ( 466) 00:19:44.137 2.604 - 2.618: 34.9913% ( 177) 00:19:44.137 2.618 - 2.633: 36.3430% ( 125) 00:19:44.137 2.633 - 2.647: 37.7703% ( 132) 00:19:44.137 2.647 - 2.662: 44.5177% ( 624) 00:19:44.137 2.662 - 2.676: 58.8452% ( 1325) 00:19:44.137 2.676 - 2.691: 69.9719% ( 1029) 00:19:44.137 2.691 - 2.705: 75.2271% ( 486) 00:19:44.137 2.705 - 
2.720: 77.6708% ( 226) 00:19:44.137 2.720 - 2.735: 79.2063% ( 142) 00:19:44.137 2.735 - 2.749: 80.3093% ( 102) 00:19:44.137 2.749 - 2.764: 81.5960% ( 119) 00:19:44.137 2.764 - 2.778: 83.0558% ( 135) 00:19:44.137 2.778 - 2.793: 84.7535% ( 157) 00:19:44.137 2.793 - 2.807: 86.2781% ( 141) 00:19:44.137 2.807 - 2.822: 87.0891% ( 75) 00:19:44.137 2.822 - 2.836: 87.6406% ( 51) 00:19:44.137 2.836 - 2.851: 88.2461% ( 56) 00:19:44.137 2.851 - 2.865: 89.0355% ( 73) 00:19:44.137 2.865 - 2.880: 90.2790% ( 115) 00:19:44.137 2.880 - 2.895: 91.5549% ( 118) 00:19:44.137 2.895 - 2.909: 92.2686% ( 66) 00:19:44.137 2.909 - 2.924: 92.7984% ( 49) 00:19:44.137 2.924 - 2.938: 93.1445% ( 32) 00:19:44.137 2.938 - 2.953: 93.4580% ( 29) 00:19:44.137 2.953 - 2.967: 93.8689% ( 38) 00:19:44.137 2.967 - 2.982: 94.1285% ( 24) 00:19:44.137 2.982 - 2.996: 94.3447% ( 20) 00:19:44.137 2.996 - 3.011: 94.4961% ( 14) 00:19:44.137 3.011 - 3.025: 94.6691% ( 16) 00:19:44.137 3.025 - 3.040: 94.9286% ( 24) 00:19:44.137 3.040 - 3.055: 95.0692% ( 13) 00:19:44.137 3.055 - 3.069: 95.1881% ( 11) 00:19:44.137 3.069 - 3.084: 95.2638% ( 7) 00:19:44.137 3.084 - 3.098: 95.3395% ( 7) 00:19:44.137 3.098 - 3.113: 95.3503% ( 1) 00:19:44.137 3.113 - 3.127: 95.4044% ( 5) 00:19:44.137 3.127 - 3.142: 95.4477% ( 4) 00:19:44.137 3.142 - 3.156: 95.5234% ( 7) 00:19:44.137 3.156 - 3.171: 95.5342% ( 1) 00:19:44.137 3.171 - 3.185: 95.5666% ( 3) 00:19:44.137 3.185 - 3.200: 95.6207% ( 5) 00:19:44.137 3.200 - 3.215: 95.6639% ( 4) 00:19:44.137 3.215 - 3.229: 95.7072% ( 4) 00:19:44.137 3.229 - 3.244: 95.7504% ( 4) 00:19:44.137 3.244 - 3.258: 95.7829% ( 3) 00:19:44.137 3.258 - 3.273: 95.7937% ( 1) 00:19:44.137 3.273 - 3.287: 95.8369% ( 4) 00:19:44.137 3.287 - 3.302: 95.8478% ( 1) 00:19:44.137 3.331 - 3.345: 95.8586% ( 1) 00:19:44.137 3.360 - 3.375: 95.8694% ( 1) 00:19:44.137 3.418 - 3.433: 95.8802% ( 1) 00:19:44.137 3.447 - 3.462: 95.8910% ( 1) 00:19:44.137 3.462 - 3.476: 95.9126% ( 2) 00:19:44.137 3.869 - 3.898: 95.9343% ( 2) 00:19:44.137 3.898 - 3.927: 95.9451% ( 1) 00:19:44.137 4.160 - 4.189: 95.9559% ( 1) 00:19:44.137 4.305 - 4.335: 95.9775% ( 2) 00:19:44.137 4.480 - 4.509: 95.9883% ( 1) 00:19:44.137 4.538 - 4.567: 95.9991% ( 1) 00:19:44.137 4.567 - 4.596: 96.0099% ( 1) 00:19:44.137 4.684 - 4.713: 96.0208% ( 1) 00:19:44.137 4.771 - 4.800: 96.0316% ( 1) 00:19:44.137 4.800 - 4.829: 96.0424% ( 1) 00:19:44.137 4.916 - 4.945: 96.0640% ( 2) 00:19:44.137 4.945 - 4.975: 96.0748% ( 1) 00:19:44.137 4.975 - 5.004: 96.1073% ( 3) 00:19:44.137 5.062 - 5.091: 96.1289% ( 2) 00:19:44.137 5.091 - 5.120: 96.1505% ( 2) 00:19:44.137 5.120 - 5.149: 96.1613% ( 1) 00:19:44.137 5.149 - 5.178: 96.1721% ( 1) 00:19:44.137 5.178 - 5.207: 96.1830% ( 1) 00:19:44.137 5.265 - 5.295: 96.2046% ( 2) 00:19:44.137 5.295 - 5.324: 96.2154% ( 1) 00:19:44.137 5.585 - 5.615: 96.2262% ( 1) 00:19:44.137 5.935 - 5.964: 96.2370% ( 1) 00:19:44.137 6.138 - 6.167: 96.2478% ( 1) 00:19:44.137 6.167 - 6.196: 96.2587% ( 1) 00:19:44.137 6.225 - 6.255: 96.2695% ( 1) 00:19:44.137 6.255 - 6.284: 96.2803% ( 1) 00:19:44.137 6.429 - 6.458: 96.2911% ( 1) 00:19:44.137 6.516 - 6.545: 96.3127% ( 2) 00:19:44.137 6.575 - 6.604: 96.3343% ( 2) 00:19:44.137 6.604 - 6.633: 96.3452% ( 1) 00:19:44.137 6.691 - 6.720: 96.3560% ( 1) 00:19:44.137 6.749 - 6.778: 96.3668% ( 1) 00:19:44.137 6.836 - 6.865: 96.3776% ( 1) 00:19:44.137 6.953 - 6.982: 96.3884% ( 1) 00:19:44.137 7.098 - 7.127: 96.3992% ( 1) 00:19:44.137 7.215 - 7.244: 96.4100% ( 1) 00:19:44.137 7.273 - 7.302: 96.4208% ( 1) 00:19:44.137 7.302 - 7.331: 96.4317% ( 1) 00:19:44.137 
7.331 - 7.360: 96.4425% ( 1) 00:19:44.137 7.360 - 7.389: 96.4533% ( 1) 00:19:44.137 7.389 - 7.418: 96.4641% ( 1) 00:19:44.137 7.971 - 8.029: 96.4749% ( 1) 00:19:44.137 8.029 - 8.087: 96.4857% ( 1) 00:19:44.137 8.495 - 8.553: 96.4965% ( 1) 00:19:44.137 8.669 - 8.727: 96.5074% ( 1) 00:19:44.137 8.727 - 8.785: 96.5290% ( 2) 00:19:44.137 8.785 - 8.844: 96.5398% ( 1) 00:19:44.137 8.960 - 9.018: 96.5506% ( 1) 00:19:44.137 9.135 - 9.193: 96.5614% ( 1) 00:19:44.137 9.251 - 9.309: 96.5722% ( 1) 00:19:44.137 9.484 - 9.542: 96.5830% ( 1) 00:19:44.137 9.600 - 9.658: 96.5939% ( 1) 00:19:44.137 9.716 - 9.775: 96.6047% ( 1) 00:19:44.137 9.833 - 9.891: 96.6155% ( 1) 00:19:44.137 9.891 - 9.949: 96.6263% ( 1) 00:19:44.137 10.007 - 10.065: 96.6371% ( 1) 00:19:44.137 10.065 - 10.124: 96.6587% ( 2) 00:19:44.137 10.124 - 10.182: 96.6804% ( 2) 00:19:44.137 10.415 - 10.473: 96.6912% ( 1) 00:19:44.137 10.473 - 10.531: 96.7020% ( 1) 00:19:44.137 10.531 - 10.589: 96.7128% ( 1) 00:19:44.137 10.589 - 10.647: 96.7236% ( 1) 00:19:44.137 10.764 - 10.822: 96.7561% ( 3) 00:19:44.137 10.822 - 10.880: 96.7669% ( 1) 00:19:44.137 10.880 - 10.938: 96.7885% ( 2) 00:19:44.137 10.996 - 11.055: 96.7993% ( 1) 00:19:44.137 11.113 - 11.171: 96.8101% ( 1) 00:19:44.137 11.229 - 11.287: 96.8209% ( 1) 00:19:44.137 11.345 - 11.404: 96.8426% ( 2) 00:19:44.137 11.404 - 11.462: 96.8534% ( 1) 00:19:44.137 11.520 - 11.578: 96.8750% ( 2) 00:19:44.137 11.811 - 11.869: 96.8858% ( 1) 00:19:44.137 11.985 - 12.044: 96.9074% ( 2) 00:19:44.137 12.044 - 12.102: 96.9183% ( 1) 00:19:44.137 12.160 - 12.218: 96.9291% ( 1) 00:19:44.137 12.393 - 12.451: 96.9399% ( 1) 00:19:44.137 12.567 - 12.625: 96.9615% ( 2) 00:19:44.137 12.625 - 12.684: 96.9723% ( 1) 00:19:44.137 12.800 - 12.858: 96.9831% ( [2024-07-22 18:26:56.075880] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:44.398 1) 00:19:44.398 12.975 - 13.033: 96.9939% ( 1) 00:19:44.398 13.498 - 13.556: 97.0156% ( 2) 00:19:44.398 13.847 - 13.905: 97.0372% ( 2) 00:19:44.398 13.964 - 14.022: 97.0480% ( 1) 00:19:44.398 14.138 - 14.196: 97.0588% ( 1) 00:19:44.398 14.196 - 14.255: 97.0696% ( 1) 00:19:44.398 14.371 - 14.429: 97.0804% ( 1) 00:19:44.398 14.429 - 14.487: 97.0913% ( 1) 00:19:44.398 14.720 - 14.778: 97.1021% ( 1) 00:19:44.398 15.011 - 15.127: 97.1129% ( 1) 00:19:44.398 15.942 - 16.058: 97.1237% ( 1) 00:19:44.398 16.175 - 16.291: 97.1345% ( 1) 00:19:44.398 16.407 - 16.524: 97.1453% ( 1) 00:19:44.398 16.524 - 16.640: 97.1561% ( 1) 00:19:44.398 16.640 - 16.756: 97.1886% ( 3) 00:19:44.398 16.756 - 16.873: 97.1994% ( 1) 00:19:44.398 16.873 - 16.989: 97.3183% ( 11) 00:19:44.398 16.989 - 17.105: 97.4913% ( 16) 00:19:44.398 17.105 - 17.222: 97.6644% ( 16) 00:19:44.398 17.222 - 17.338: 97.8374% ( 16) 00:19:44.398 17.338 - 17.455: 97.9888% ( 14) 00:19:44.398 17.455 - 17.571: 98.1077% ( 11) 00:19:44.398 17.571 - 17.687: 98.1401% ( 3) 00:19:44.398 17.687 - 17.804: 98.1834% ( 4) 00:19:44.398 17.804 - 17.920: 98.1942% ( 1) 00:19:44.398 17.920 - 18.036: 98.2483% ( 5) 00:19:44.398 18.036 - 18.153: 98.3023% ( 5) 00:19:44.398 18.153 - 18.269: 98.3348% ( 3) 00:19:44.398 18.269 - 18.385: 98.3888% ( 5) 00:19:44.398 18.385 - 18.502: 98.4213% ( 3) 00:19:44.398 18.502 - 18.618: 98.5186% ( 9) 00:19:44.398 18.618 - 18.735: 98.5943% ( 7) 00:19:44.398 18.735 - 18.851: 98.6484% ( 5) 00:19:44.398 18.851 - 18.967: 98.7349% ( 8) 00:19:44.398 18.967 - 19.084: 98.7781% ( 4) 00:19:44.398 19.084 - 19.200: 98.7997% ( 2) 00:19:44.398 19.200 - 19.316: 98.8106% ( 1) 00:19:44.398 
19.316 - 19.433: 98.8646% ( 5) 00:19:44.398 19.433 - 19.549: 98.8862% ( 2) 00:19:44.398 19.665 - 19.782: 98.9079% ( 2) 00:19:44.398 19.782 - 19.898: 98.9403% ( 3) 00:19:44.398 19.898 - 20.015: 98.9511% ( 1) 00:19:44.398 20.015 - 20.131: 98.9728% ( 2) 00:19:44.398 20.247 - 20.364: 98.9836% ( 1) 00:19:44.398 20.364 - 20.480: 99.0052% ( 2) 00:19:44.398 20.596 - 20.713: 99.0160% ( 1) 00:19:44.398 20.945 - 21.062: 99.0268% ( 1) 00:19:44.398 22.807 - 22.924: 99.0376% ( 1) 00:19:44.398 24.204 - 24.320: 99.0484% ( 1) 00:19:44.398 24.553 - 24.669: 99.0593% ( 1) 00:19:44.398 25.018 - 25.135: 99.0701% ( 1) 00:19:44.398 25.367 - 25.484: 99.0809% ( 1) 00:19:44.398 25.833 - 25.949: 99.0917% ( 1) 00:19:44.398 26.764 - 26.880: 99.1025% ( 1) 00:19:44.398 26.880 - 26.996: 99.1133% ( 1) 00:19:44.398 1050.065 - 1057.513: 99.1241% ( 1) 00:19:44.398 3038.487 - 3053.382: 99.1782% ( 5) 00:19:44.398 3053.382 - 3068.276: 99.2106% ( 3) 00:19:44.398 3068.276 - 3083.171: 99.2323% ( 2) 00:19:44.398 3098.065 - 3112.960: 99.2431% ( 1) 00:19:44.398 3842.793 - 3872.582: 99.2539% ( 1) 00:19:44.398 3902.371 - 3932.160: 99.2755% ( 2) 00:19:44.398 3932.160 - 3961.949: 99.3080% ( 3) 00:19:44.398 3961.949 - 3991.738: 99.4485% ( 13) 00:19:44.398 3991.738 - 4021.527: 99.7513% ( 28) 00:19:44.398 4021.527 - 4051.316: 99.9135% ( 15) 00:19:44.398 4051.316 - 4081.105: 99.9567% ( 4) 00:19:44.398 4081.105 - 4110.895: 99.9784% ( 2) 00:19:44.398 6970.647 - 7000.436: 99.9892% ( 1) 00:19:44.398 7060.015 - 7089.804: 100.0000% ( 1) 00:19:44.398 00:19:44.398 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:19:44.398 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:44.398 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:19:44.398 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:19:44.398 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:44.664 [ 00:19:44.664 { 00:19:44.664 "allow_any_host": true, 00:19:44.664 "hosts": [], 00:19:44.664 "listen_addresses": [], 00:19:44.664 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:44.664 "subtype": "Discovery" 00:19:44.664 }, 00:19:44.664 { 00:19:44.664 "allow_any_host": true, 00:19:44.664 "hosts": [], 00:19:44.664 "listen_addresses": [ 00:19:44.664 { 00:19:44.664 "adrfam": "IPv4", 00:19:44.664 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:44.665 "trsvcid": "0", 00:19:44.665 "trtype": "VFIOUSER" 00:19:44.665 } 00:19:44.665 ], 00:19:44.665 "max_cntlid": 65519, 00:19:44.665 "max_namespaces": 32, 00:19:44.665 "min_cntlid": 1, 00:19:44.665 "model_number": "SPDK bdev Controller", 00:19:44.665 "namespaces": [ 00:19:44.665 { 00:19:44.665 "bdev_name": "Malloc1", 00:19:44.665 "name": "Malloc1", 00:19:44.665 "nguid": "F30C087608A84ACB8C0C9C7CA865FAE4", 00:19:44.665 "nsid": 1, 00:19:44.665 "uuid": "f30c0876-08a8-4acb-8c0c-9c7ca865fae4" 00:19:44.665 } 00:19:44.665 ], 00:19:44.665 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:44.665 "serial_number": "SPDK1", 00:19:44.665 "subtype": "NVMe" 00:19:44.665 }, 00:19:44.665 { 00:19:44.665 "allow_any_host": true, 00:19:44.665 "hosts": [], 00:19:44.665 "listen_addresses": [ 00:19:44.665 { 00:19:44.665 
"adrfam": "IPv4", 00:19:44.665 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:44.665 "trsvcid": "0", 00:19:44.665 "trtype": "VFIOUSER" 00:19:44.665 } 00:19:44.665 ], 00:19:44.665 "max_cntlid": 65519, 00:19:44.665 "max_namespaces": 32, 00:19:44.665 "min_cntlid": 1, 00:19:44.665 "model_number": "SPDK bdev Controller", 00:19:44.665 "namespaces": [ 00:19:44.665 { 00:19:44.665 "bdev_name": "Malloc2", 00:19:44.665 "name": "Malloc2", 00:19:44.665 "nguid": "0D43F84BE610420FAE0A84944499F12A", 00:19:44.665 "nsid": 1, 00:19:44.665 "uuid": "0d43f84b-e610-420f-ae0a-84944499f12a" 00:19:44.665 } 00:19:44.665 ], 00:19:44.665 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:44.665 "serial_number": "SPDK2", 00:19:44.665 "subtype": "NVMe" 00:19:44.665 } 00:19:44.665 ] 00:19:44.665 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:44.665 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=84482 00:19:44.665 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:44.665 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:19:44.665 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:19:44.665 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:44.665 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:19:44.665 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:19:44.665 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:44.665 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:44.665 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:19:44.665 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:19:44.665 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:44.924 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:44.924 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:19:44.924 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=3 00:19:44.924 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:44.924 [2024-07-22 18:26:56.824908] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:44.924 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:44.924 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:44.924 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:19:44.924 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:44.924 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:19:45.491 Malloc3 00:19:45.491 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:19:45.749 [2024-07-22 18:26:57.601827] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:45.749 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:45.749 Asynchronous Event Request test 00:19:45.749 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:45.749 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:45.749 Registering asynchronous event callbacks... 00:19:45.749 Starting namespace attribute notice tests for all controllers... 00:19:45.749 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:45.749 aer_cb - Changed Namespace 00:19:45.749 Cleaning up... 00:19:46.008 [ 00:19:46.008 { 00:19:46.008 "allow_any_host": true, 00:19:46.008 "hosts": [], 00:19:46.008 "listen_addresses": [], 00:19:46.008 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:46.008 "subtype": "Discovery" 00:19:46.008 }, 00:19:46.008 { 00:19:46.008 "allow_any_host": true, 00:19:46.008 "hosts": [], 00:19:46.008 "listen_addresses": [ 00:19:46.008 { 00:19:46.008 "adrfam": "IPv4", 00:19:46.008 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:46.008 "trsvcid": "0", 00:19:46.008 "trtype": "VFIOUSER" 00:19:46.008 } 00:19:46.008 ], 00:19:46.008 "max_cntlid": 65519, 00:19:46.008 "max_namespaces": 32, 00:19:46.008 "min_cntlid": 1, 00:19:46.008 "model_number": "SPDK bdev Controller", 00:19:46.008 "namespaces": [ 00:19:46.008 { 00:19:46.008 "bdev_name": "Malloc1", 00:19:46.008 "name": "Malloc1", 00:19:46.008 "nguid": "F30C087608A84ACB8C0C9C7CA865FAE4", 00:19:46.008 "nsid": 1, 00:19:46.008 "uuid": "f30c0876-08a8-4acb-8c0c-9c7ca865fae4" 00:19:46.008 }, 00:19:46.008 { 00:19:46.008 "bdev_name": "Malloc3", 00:19:46.008 "name": "Malloc3", 00:19:46.008 "nguid": "95609156E19A464F8BD41A841EB62150", 00:19:46.008 "nsid": 2, 00:19:46.008 "uuid": "95609156-e19a-464f-8bd4-1a841eb62150" 00:19:46.008 } 00:19:46.008 ], 00:19:46.008 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:46.008 "serial_number": "SPDK1", 00:19:46.008 "subtype": "NVMe" 00:19:46.008 }, 00:19:46.008 { 00:19:46.008 "allow_any_host": true, 00:19:46.008 "hosts": [], 00:19:46.008 "listen_addresses": [ 00:19:46.008 { 00:19:46.008 "adrfam": "IPv4", 00:19:46.008 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:46.008 "trsvcid": "0", 00:19:46.008 "trtype": "VFIOUSER" 00:19:46.008 } 00:19:46.008 ], 00:19:46.008 "max_cntlid": 65519, 00:19:46.008 "max_namespaces": 32, 00:19:46.008 "min_cntlid": 1, 00:19:46.008 "model_number": "SPDK bdev Controller", 00:19:46.008 "namespaces": [ 00:19:46.008 { 00:19:46.008 "bdev_name": "Malloc2", 00:19:46.008 "name": "Malloc2", 00:19:46.008 "nguid": "0D43F84BE610420FAE0A84944499F12A", 00:19:46.008 "nsid": 1, 00:19:46.008 "uuid": 
"0d43f84b-e610-420f-ae0a-84944499f12a" 00:19:46.008 } 00:19:46.008 ], 00:19:46.008 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:46.008 "serial_number": "SPDK2", 00:19:46.008 "subtype": "NVMe" 00:19:46.008 } 00:19:46.008 ] 00:19:46.008 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 84482 00:19:46.008 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:46.008 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:46.008 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:19:46.008 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:46.009 [2024-07-22 18:26:57.987353] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:46.009 [2024-07-22 18:26:57.987475] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84527 ] 00:19:46.269 [2024-07-22 18:26:58.153649] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:19:46.269 [2024-07-22 18:26:58.165563] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:46.269 [2024-07-22 18:26:58.165637] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f61f619b000 00:19:46.269 [2024-07-22 18:26:58.166531] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:46.269 [2024-07-22 18:26:58.167515] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:46.269 [2024-07-22 18:26:58.168540] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:46.269 [2024-07-22 18:26:58.169532] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:46.269 [2024-07-22 18:26:58.170568] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:46.269 [2024-07-22 18:26:58.171581] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:46.269 [2024-07-22 18:26:58.172565] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:46.269 [2024-07-22 18:26:58.173559] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:46.269 [2024-07-22 18:26:58.174593] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:46.269 [2024-07-22 18:26:58.174639] 
vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f61f6190000 00:19:46.269 [2024-07-22 18:26:58.176238] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:46.269 [2024-07-22 18:26:58.192779] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:19:46.269 [2024-07-22 18:26:58.192903] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:19:46.269 [2024-07-22 18:26:58.198111] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:46.269 [2024-07-22 18:26:58.198276] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:46.269 [2024-07-22 18:26:58.199020] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:19:46.269 [2024-07-22 18:26:58.199075] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:19:46.269 [2024-07-22 18:26:58.199089] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:19:46.269 [2024-07-22 18:26:58.199199] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:19:46.269 [2024-07-22 18:26:58.199228] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:19:46.269 [2024-07-22 18:26:58.199251] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:19:46.269 [2024-07-22 18:26:58.200203] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:46.269 [2024-07-22 18:26:58.200250] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:19:46.269 [2024-07-22 18:26:58.200272] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:19:46.269 [2024-07-22 18:26:58.201204] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:19:46.269 [2024-07-22 18:26:58.201243] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:46.269 [2024-07-22 18:26:58.202206] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:19:46.269 [2024-07-22 18:26:58.202247] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:19:46.269 [2024-07-22 18:26:58.202264] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:19:46.269 [2024-07-22 18:26:58.202284] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:46.269 [2024-07-22 18:26:58.202402] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:19:46.269 [2024-07-22 18:26:58.202415] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:46.269 [2024-07-22 18:26:58.202429] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:19:46.269 [2024-07-22 18:26:58.203204] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:19:46.269 [2024-07-22 18:26:58.204213] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:19:46.269 [2024-07-22 18:26:58.205233] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:46.269 [2024-07-22 18:26:58.206230] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:46.269 [2024-07-22 18:26:58.206386] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:46.269 [2024-07-22 18:26:58.209867] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:19:46.269 [2024-07-22 18:26:58.209910] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:46.269 [2024-07-22 18:26:58.209924] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:19:46.269 [2024-07-22 18:26:58.209959] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:19:46.269 [2024-07-22 18:26:58.209981] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:19:46.269 [2024-07-22 18:26:58.210033] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:46.269 [2024-07-22 18:26:58.210049] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:46.269 [2024-07-22 18:26:58.210059] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:46.269 [2024-07-22 18:26:58.210087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:46.269 [2024-07-22 18:26:58.216868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:46.269 [2024-07-22 18:26:58.216919] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:19:46.269 [2024-07-22 18:26:58.216935] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:19:46.269 [2024-07-22 18:26:58.216945] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:19:46.269 [2024-07-22 18:26:58.216957] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:46.269 [2024-07-22 18:26:58.216970] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:19:46.269 [2024-07-22 18:26:58.216982] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:19:46.269 [2024-07-22 18:26:58.216993] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:19:46.269 [2024-07-22 18:26:58.217016] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:19:46.269 [2024-07-22 18:26:58.217039] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:46.269 [2024-07-22 18:26:58.227945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:46.269 [2024-07-22 18:26:58.228029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:46.269 [2024-07-22 18:26:58.228061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:46.269 [2024-07-22 18:26:58.228075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:46.269 [2024-07-22 18:26:58.228092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:46.269 [2024-07-22 18:26:58.228103] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:19:46.269 [2024-07-22 18:26:58.228125] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:46.269 [2024-07-22 18:26:58.228143] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:46.269 [2024-07-22 18:26:58.237966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:46.269 [2024-07-22 18:26:58.238024] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:19:46.270 [2024-07-22 18:26:58.238045] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:46.270 [2024-07-22 18:26:58.238061] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:19:46.270 [2024-07-22 18:26:58.238079] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:19:46.270 [2024-07-22 18:26:58.238100] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:46.270 [2024-07-22 18:26:58.245867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:46.270 [2024-07-22 18:26:58.246004] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:19:46.270 [2024-07-22 18:26:58.246057] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:19:46.270 [2024-07-22 18:26:58.246081] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:46.270 [2024-07-22 18:26:58.246099] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:46.270 [2024-07-22 18:26:58.246107] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:46.270 [2024-07-22 18:26:58.246125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:46.270 [2024-07-22 18:26:58.252898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:46.270 [2024-07-22 18:26:58.252976] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:19:46.270 [2024-07-22 18:26:58.253003] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:19:46.270 [2024-07-22 18:26:58.253034] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:19:46.270 [2024-07-22 18:26:58.253054] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:46.270 [2024-07-22 18:26:58.253068] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:46.270 [2024-07-22 18:26:58.253079] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:46.270 [2024-07-22 18:26:58.253099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:46.270 [2024-07-22 18:26:58.263902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:46.270 [2024-07-22 18:26:58.264008] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:46.270 [2024-07-22 18:26:58.264036] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:46.270 [2024-07-22 18:26:58.264063] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:46.270 [2024-07-22 18:26:58.264074] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: 
*DEBUG*: prp1 = 0x2000002fb000 00:19:46.270 [2024-07-22 18:26:58.264084] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:46.270 [2024-07-22 18:26:58.264102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:46.270 [2024-07-22 18:26:58.271954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:46.270 [2024-07-22 18:26:58.272059] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:46.270 [2024-07-22 18:26:58.272081] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:19:46.270 [2024-07-22 18:26:58.272098] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:19:46.270 [2024-07-22 18:26:58.272117] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:19:46.270 [2024-07-22 18:26:58.272130] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:46.270 [2024-07-22 18:26:58.272143] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:19:46.270 [2024-07-22 18:26:58.272154] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:19:46.270 [2024-07-22 18:26:58.272166] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:19:46.270 [2024-07-22 18:26:58.272176] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:19:46.270 [2024-07-22 18:26:58.272235] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:46.270 [2024-07-22 18:26:58.279883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:46.270 [2024-07-22 18:26:58.279941] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:46.530 [2024-07-22 18:26:58.287914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:46.530 [2024-07-22 18:26:58.287988] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:46.530 [2024-07-22 18:26:58.295881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:46.530 [2024-07-22 18:26:58.295962] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:46.530 [2024-07-22 18:26:58.303880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 
dnr:0 00:19:46.530 [2024-07-22 18:26:58.303976] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:46.530 [2024-07-22 18:26:58.303992] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:46.530 [2024-07-22 18:26:58.304003] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:46.530 [2024-07-22 18:26:58.304011] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:46.530 [2024-07-22 18:26:58.304026] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:46.530 [2024-07-22 18:26:58.304042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:46.530 [2024-07-22 18:26:58.304062] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:46.530 [2024-07-22 18:26:58.304083] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:46.530 [2024-07-22 18:26:58.304101] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:46.530 [2024-07-22 18:26:58.304121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:46.530 [2024-07-22 18:26:58.304144] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:46.530 [2024-07-22 18:26:58.304153] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:46.530 [2024-07-22 18:26:58.304163] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:46.530 [2024-07-22 18:26:58.304175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:46.530 [2024-07-22 18:26:58.304198] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:46.531 [2024-07-22 18:26:58.304220] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:46.531 [2024-07-22 18:26:58.304230] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:46.531 [2024-07-22 18:26:58.304245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:46.531 [2024-07-22 18:26:58.311906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:46.531 [2024-07-22 18:26:58.311983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:46.531 [2024-07-22 18:26:58.312006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:46.531 [2024-07-22 18:26:58.312020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:46.531 ===================================================== 00:19:46.531 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 
00:19:46.531 ===================================================== 00:19:46.531 Controller Capabilities/Features 00:19:46.531 ================================ 00:19:46.531 Vendor ID: 4e58 00:19:46.531 Subsystem Vendor ID: 4e58 00:19:46.531 Serial Number: SPDK2 00:19:46.531 Model Number: SPDK bdev Controller 00:19:46.531 Firmware Version: 24.09 00:19:46.531 Recommended Arb Burst: 6 00:19:46.531 IEEE OUI Identifier: 8d 6b 50 00:19:46.531 Multi-path I/O 00:19:46.531 May have multiple subsystem ports: Yes 00:19:46.531 May have multiple controllers: Yes 00:19:46.531 Associated with SR-IOV VF: No 00:19:46.531 Max Data Transfer Size: 131072 00:19:46.531 Max Number of Namespaces: 32 00:19:46.531 Max Number of I/O Queues: 127 00:19:46.531 NVMe Specification Version (VS): 1.3 00:19:46.531 NVMe Specification Version (Identify): 1.3 00:19:46.531 Maximum Queue Entries: 256 00:19:46.531 Contiguous Queues Required: Yes 00:19:46.531 Arbitration Mechanisms Supported 00:19:46.531 Weighted Round Robin: Not Supported 00:19:46.531 Vendor Specific: Not Supported 00:19:46.531 Reset Timeout: 15000 ms 00:19:46.531 Doorbell Stride: 4 bytes 00:19:46.531 NVM Subsystem Reset: Not Supported 00:19:46.531 Command Sets Supported 00:19:46.531 NVM Command Set: Supported 00:19:46.531 Boot Partition: Not Supported 00:19:46.531 Memory Page Size Minimum: 4096 bytes 00:19:46.531 Memory Page Size Maximum: 4096 bytes 00:19:46.531 Persistent Memory Region: Not Supported 00:19:46.531 Optional Asynchronous Events Supported 00:19:46.531 Namespace Attribute Notices: Supported 00:19:46.531 Firmware Activation Notices: Not Supported 00:19:46.531 ANA Change Notices: Not Supported 00:19:46.531 PLE Aggregate Log Change Notices: Not Supported 00:19:46.531 LBA Status Info Alert Notices: Not Supported 00:19:46.531 EGE Aggregate Log Change Notices: Not Supported 00:19:46.531 Normal NVM Subsystem Shutdown event: Not Supported 00:19:46.531 Zone Descriptor Change Notices: Not Supported 00:19:46.531 Discovery Log Change Notices: Not Supported 00:19:46.531 Controller Attributes 00:19:46.531 128-bit Host Identifier: Supported 00:19:46.531 Non-Operational Permissive Mode: Not Supported 00:19:46.531 NVM Sets: Not Supported 00:19:46.531 Read Recovery Levels: Not Supported 00:19:46.531 Endurance Groups: Not Supported 00:19:46.531 Predictable Latency Mode: Not Supported 00:19:46.531 Traffic Based Keep ALive: Not Supported 00:19:46.531 Namespace Granularity: Not Supported 00:19:46.531 SQ Associations: Not Supported 00:19:46.531 UUID List: Not Supported 00:19:46.531 Multi-Domain Subsystem: Not Supported 00:19:46.531 Fixed Capacity Management: Not Supported 00:19:46.531 Variable Capacity Management: Not Supported 00:19:46.531 Delete Endurance Group: Not Supported 00:19:46.531 Delete NVM Set: Not Supported 00:19:46.531 Extended LBA Formats Supported: Not Supported 00:19:46.531 Flexible Data Placement Supported: Not Supported 00:19:46.531 00:19:46.531 Controller Memory Buffer Support 00:19:46.531 ================================ 00:19:46.531 Supported: No 00:19:46.531 00:19:46.531 Persistent Memory Region Support 00:19:46.531 ================================ 00:19:46.531 Supported: No 00:19:46.531 00:19:46.531 Admin Command Set Attributes 00:19:46.531 ============================ 00:19:46.531 Security Send/Receive: Not Supported 00:19:46.531 Format NVM: Not Supported 00:19:46.531 Firmware Activate/Download: Not Supported 00:19:46.531 Namespace Management: Not Supported 00:19:46.531 Device Self-Test: Not Supported 00:19:46.531 Directives: Not Supported 
00:19:46.531 NVMe-MI: Not Supported 00:19:46.531 Virtualization Management: Not Supported 00:19:46.531 Doorbell Buffer Config: Not Supported 00:19:46.531 Get LBA Status Capability: Not Supported 00:19:46.531 Command & Feature Lockdown Capability: Not Supported 00:19:46.531 Abort Command Limit: 4 00:19:46.531 Async Event Request Limit: 4 00:19:46.531 Number of Firmware Slots: N/A 00:19:46.531 Firmware Slot 1 Read-Only: N/A 00:19:46.531 Firmware Activation Without Reset: N/A 00:19:46.531 Multiple Update Detection Support: N/A 00:19:46.531 Firmware Update Granularity: No Information Provided 00:19:46.531 Per-Namespace SMART Log: No 00:19:46.531 Asymmetric Namespace Access Log Page: Not Supported 00:19:46.531 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:19:46.531 Command Effects Log Page: Supported 00:19:46.531 Get Log Page Extended Data: Supported 00:19:46.531 Telemetry Log Pages: Not Supported 00:19:46.531 Persistent Event Log Pages: Not Supported 00:19:46.531 Supported Log Pages Log Page: May Support 00:19:46.531 Commands Supported & Effects Log Page: Not Supported 00:19:46.532 Feature Identifiers & Effects Log Page:May Support 00:19:46.532 NVMe-MI Commands & Effects Log Page: May Support 00:19:46.532 Data Area 4 for Telemetry Log: Not Supported 00:19:46.532 Error Log Page Entries Supported: 128 00:19:46.532 Keep Alive: Supported 00:19:46.532 Keep Alive Granularity: 10000 ms 00:19:46.532 00:19:46.532 NVM Command Set Attributes 00:19:46.532 ========================== 00:19:46.532 Submission Queue Entry Size 00:19:46.532 Max: 64 00:19:46.532 Min: 64 00:19:46.532 Completion Queue Entry Size 00:19:46.532 Max: 16 00:19:46.532 Min: 16 00:19:46.532 Number of Namespaces: 32 00:19:46.532 Compare Command: Supported 00:19:46.532 Write Uncorrectable Command: Not Supported 00:19:46.532 Dataset Management Command: Supported 00:19:46.532 Write Zeroes Command: Supported 00:19:46.532 Set Features Save Field: Not Supported 00:19:46.532 Reservations: Not Supported 00:19:46.532 Timestamp: Not Supported 00:19:46.532 Copy: Supported 00:19:46.532 Volatile Write Cache: Present 00:19:46.532 Atomic Write Unit (Normal): 1 00:19:46.532 Atomic Write Unit (PFail): 1 00:19:46.532 Atomic Compare & Write Unit: 1 00:19:46.532 Fused Compare & Write: Supported 00:19:46.532 Scatter-Gather List 00:19:46.532 SGL Command Set: Supported (Dword aligned) 00:19:46.532 SGL Keyed: Not Supported 00:19:46.532 SGL Bit Bucket Descriptor: Not Supported 00:19:46.532 SGL Metadata Pointer: Not Supported 00:19:46.532 Oversized SGL: Not Supported 00:19:46.532 SGL Metadata Address: Not Supported 00:19:46.532 SGL Offset: Not Supported 00:19:46.532 Transport SGL Data Block: Not Supported 00:19:46.532 Replay Protected Memory Block: Not Supported 00:19:46.532 00:19:46.532 Firmware Slot Information 00:19:46.532 ========================= 00:19:46.532 Active slot: 1 00:19:46.532 Slot 1 Firmware Revision: 24.09 00:19:46.532 00:19:46.532 00:19:46.532 Commands Supported and Effects 00:19:46.532 ============================== 00:19:46.532 Admin Commands 00:19:46.532 -------------- 00:19:46.532 Get Log Page (02h): Supported 00:19:46.532 Identify (06h): Supported 00:19:46.532 Abort (08h): Supported 00:19:46.532 Set Features (09h): Supported 00:19:46.532 Get Features (0Ah): Supported 00:19:46.532 Asynchronous Event Request (0Ch): Supported 00:19:46.532 Keep Alive (18h): Supported 00:19:46.532 I/O Commands 00:19:46.532 ------------ 00:19:46.532 Flush (00h): Supported LBA-Change 00:19:46.532 Write (01h): Supported LBA-Change 00:19:46.532 Read (02h): Supported 
00:19:46.532 Compare (05h): Supported 00:19:46.532 Write Zeroes (08h): Supported LBA-Change 00:19:46.532 Dataset Management (09h): Supported LBA-Change 00:19:46.532 Copy (19h): Supported LBA-Change 00:19:46.532 00:19:46.532 Error Log 00:19:46.532 ========= 00:19:46.532 00:19:46.532 Arbitration 00:19:46.532 =========== 00:19:46.532 Arbitration Burst: 1 00:19:46.532 00:19:46.532 Power Management 00:19:46.532 ================ 00:19:46.532 Number of Power States: 1 00:19:46.532 Current Power State: Power State #0 00:19:46.532 Power State #0: 00:19:46.532 Max Power: 0.00 W 00:19:46.532 Non-Operational State: Operational 00:19:46.532 Entry Latency: Not Reported 00:19:46.532 Exit Latency: Not Reported 00:19:46.532 Relative Read Throughput: 0 00:19:46.532 Relative Read Latency: 0 00:19:46.532 Relative Write Throughput: 0 00:19:46.532 Relative Write Latency: 0 00:19:46.532 Idle Power: Not Reported 00:19:46.532 Active Power: Not Reported 00:19:46.532 Non-Operational Permissive Mode: Not Supported 00:19:46.532 00:19:46.532 Health Information 00:19:46.532 ================== 00:19:46.532 Critical Warnings: 00:19:46.532 Available Spare Space: OK 00:19:46.532 Temperature: OK 00:19:46.532 Device Reliability: OK 00:19:46.532 Read Only: No 00:19:46.532 Volatile Memory Backup: OK 00:19:46.532 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:46.532 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:46.532 Available Spare: 0% 00:19:46.532 Available Sp[2024-07-22 18:26:58.312225] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:46.532 [2024-07-22 18:26:58.319859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:46.532 [2024-07-22 18:26:58.319988] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:19:46.532 [2024-07-22 18:26:58.320023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.532 [2024-07-22 18:26:58.320041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.532 [2024-07-22 18:26:58.320054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.532 [2024-07-22 18:26:58.320068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.532 [2024-07-22 18:26:58.320235] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:46.532 [2024-07-22 18:26:58.320283] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:19:46.532 [2024-07-22 18:26:58.321242] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:46.532 [2024-07-22 18:26:58.323939] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:19:46.532 [2024-07-22 18:26:58.323994] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:19:46.533 [2024-07-22 18:26:58.324231] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:19:46.533 [2024-07-22 18:26:58.324270] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:19:46.533 [2024-07-22 18:26:58.325092] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:19:46.533 [2024-07-22 18:26:58.326581] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:46.533 are Threshold: 0% 00:19:46.533 Life Percentage Used: 0% 00:19:46.533 Data Units Read: 0 00:19:46.533 Data Units Written: 0 00:19:46.533 Host Read Commands: 0 00:19:46.533 Host Write Commands: 0 00:19:46.533 Controller Busy Time: 0 minutes 00:19:46.533 Power Cycles: 0 00:19:46.533 Power On Hours: 0 hours 00:19:46.533 Unsafe Shutdowns: 0 00:19:46.533 Unrecoverable Media Errors: 0 00:19:46.533 Lifetime Error Log Entries: 0 00:19:46.533 Warning Temperature Time: 0 minutes 00:19:46.533 Critical Temperature Time: 0 minutes 00:19:46.533 00:19:46.533 Number of Queues 00:19:46.533 ================ 00:19:46.533 Number of I/O Submission Queues: 127 00:19:46.533 Number of I/O Completion Queues: 127 00:19:46.533 00:19:46.533 Active Namespaces 00:19:46.533 ================= 00:19:46.533 Namespace ID:1 00:19:46.533 Error Recovery Timeout: Unlimited 00:19:46.533 Command Set Identifier: NVM (00h) 00:19:46.533 Deallocate: Supported 00:19:46.533 Deallocated/Unwritten Error: Not Supported 00:19:46.533 Deallocated Read Value: Unknown 00:19:46.533 Deallocate in Write Zeroes: Not Supported 00:19:46.533 Deallocated Guard Field: 0xFFFF 00:19:46.533 Flush: Supported 00:19:46.533 Reservation: Supported 00:19:46.533 Namespace Sharing Capabilities: Multiple Controllers 00:19:46.533 Size (in LBAs): 131072 (0GiB) 00:19:46.533 Capacity (in LBAs): 131072 (0GiB) 00:19:46.533 Utilization (in LBAs): 131072 (0GiB) 00:19:46.533 NGUID: 0D43F84BE610420FAE0A84944499F12A 00:19:46.533 UUID: 0d43f84b-e610-420f-ae0a-84944499f12a 00:19:46.533 Thin Provisioning: Not Supported 00:19:46.533 Per-NS Atomic Units: Yes 00:19:46.533 Atomic Boundary Size (Normal): 0 00:19:46.533 Atomic Boundary Size (PFail): 0 00:19:46.533 Atomic Boundary Offset: 0 00:19:46.533 Maximum Single Source Range Length: 65535 00:19:46.533 Maximum Copy Length: 65535 00:19:46.533 Maximum Source Range Count: 1 00:19:46.533 NGUID/EUI64 Never Reused: No 00:19:46.533 Namespace Write Protected: No 00:19:46.533 Number of LBA Formats: 1 00:19:46.533 Current LBA Format: LBA Format #00 00:19:46.533 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:46.533 00:19:46.533 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:46.801 [2024-07-22 18:26:58.783194] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:52.067 Initializing NVMe Controllers 00:19:52.067 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:52.067 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:52.067 Initialization complete. Launching workers. 
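The spdk_nvme_perf invocation above drives the latency summary that follows. As a reading aid, the same command is restated here with the flags spelled out; the annotations reflect common spdk_nvme_perf usage and this log's own EAL parameter line rather than authoritative documentation, and the -s / -g annotations are the least certain.

#   -r '<transport ID>'   target the vfio-user controller instead of a PCIe device
#   -q 128                queue depth
#   -o 4096               I/O size in bytes (4 KiB)
#   -w read               workload pattern (the second run below uses -w write)
#   -t 5                  run time in seconds
#   -c 0x2                core mask, i.e. core 1 only (the results report "from core 1")
#   -s 256 -g             memory setup; judging by the EAL parameter line earlier in the
#                         log these select the DPDK memory size and single-file segments,
#                         but that mapping is an inference
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
    -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2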
00:19:52.067 ======================================================== 00:19:52.067 Latency(us) 00:19:52.067 Device Information : IOPS MiB/s Average min max 00:19:52.067 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24749.20 96.68 5171.69 1469.12 13781.19 00:19:52.067 ======================================================== 00:19:52.067 Total : 24749.20 96.68 5171.69 1469.12 13781.19 00:19:52.067 00:19:52.067 [2024-07-22 18:27:03.879320] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:52.067 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:52.635 [2024-07-22 18:27:04.362124] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:57.903 Initializing NVMe Controllers 00:19:57.903 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:57.903 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:57.903 Initialization complete. Launching workers. 00:19:57.903 ======================================================== 00:19:57.903 Latency(us) 00:19:57.903 Device Information : IOPS MiB/s Average min max 00:19:57.903 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24865.21 97.13 5144.68 1403.11 11838.62 00:19:57.903 ======================================================== 00:19:57.903 Total : 24865.21 97.13 5144.68 1403.11 11838.62 00:19:57.903 00:19:57.903 [2024-07-22 18:27:09.375944] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:57.903 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:57.903 [2024-07-22 18:27:09.795610] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:03.175 [2024-07-22 18:27:14.929520] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:03.175 Initializing NVMe Controllers 00:20:03.175 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:03.175 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:03.175 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:20:03.175 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:20:03.175 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:20:03.175 Initialization complete. Launching workers. 
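The read-run numbers in the table above are internally consistent; a quick cross-check under the run's parameters (queue depth 128, 4096-byte I/Os, values copied from the table):

# Little's law: sustained IOPS is roughly queue_depth / average_latency, and
# bandwidth in MiB/s is IOPS * io_size / 2^20.
awk 'BEGIN {
    iops = 24749.20; qd = 128; lat_us = 5171.69; io_bytes = 4096
    printf "IOPS implied by qd/latency : %.1f\n", qd / (lat_us / 1e6)        # ~24750, matches 24749.20
    printf "Bandwidth (MiB/s)          : %.2f\n", iops * io_bytes / 1048576  # ~96.68, matches the table
}'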
00:20:03.175 Starting thread on core 2 00:20:03.175 Starting thread on core 3 00:20:03.175 Starting thread on core 1 00:20:03.175 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:20:03.434 [2024-07-22 18:27:15.394111] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:06.774 [2024-07-22 18:27:18.568463] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:06.774 Initializing NVMe Controllers 00:20:06.774 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:06.774 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:06.774 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:20:06.774 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:20:06.774 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:20:06.774 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:20:06.774 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:20:06.774 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:20:06.774 Initialization complete. Launching workers. 00:20:06.774 Starting thread on core 1 with urgent priority queue 00:20:06.774 Starting thread on core 2 with urgent priority queue 00:20:06.774 Starting thread on core 3 with urgent priority queue 00:20:06.774 Starting thread on core 0 with urgent priority queue 00:20:06.774 SPDK bdev Controller (SPDK2 ) core 0: 448.00 IO/s 223.21 secs/100000 ios 00:20:06.774 SPDK bdev Controller (SPDK2 ) core 1: 512.00 IO/s 195.31 secs/100000 ios 00:20:06.774 SPDK bdev Controller (SPDK2 ) core 2: 789.33 IO/s 126.69 secs/100000 ios 00:20:06.774 SPDK bdev Controller (SPDK2 ) core 3: 789.33 IO/s 126.69 secs/100000 ios 00:20:06.774 ======================================================== 00:20:06.774 00:20:06.774 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:07.033 [2024-07-22 18:27:19.029947] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:07.033 Initializing NVMe Controllers 00:20:07.033 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:07.033 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:07.033 Namespace ID: 1 size: 0GB 00:20:07.033 Initialization complete. 00:20:07.033 INFO: using host memory buffer for IO 00:20:07.033 Hello world! 
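The spdk_nvme_perf, reconnect, arbitration, and hello_world runs above all reach the target through the same VFIOUSER transport ID string rather than an IP address, so only the workload flag and the example binary change between invocations. A minimal sketch of that invocation pattern, assuming the socket path and subsystem NQN shown in the log (the VFIOUSER_TRID variable is a local shorthand introduced here, not part of the traced script):

  # Run the 4 KiB read and write perf workloads against the vfio-user controller,
  # mirroring the flags traced above (5 s runs, queue depth 128, core mask 0x2).
  VFIOUSER_TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  for workload in read write; do
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
          -r "$VFIOUSER_TRID" -s 256 -g -q 128 -o 4096 -w "$workload" -t 5 -c 0x2
  done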
00:20:07.033 [2024-07-22 18:27:19.044415] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:07.291 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:07.549 [2024-07-22 18:27:19.495688] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:08.924 Initializing NVMe Controllers 00:20:08.924 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:08.924 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:08.924 Initialization complete. Launching workers. 00:20:08.924 submit (in ns) avg, min, max = 8802.2, 3985.5, 7037810.9 00:20:08.924 complete (in ns) avg, min, max = 36236.8, 2342.7, 7054009.5 00:20:08.924 00:20:08.924 Submit histogram 00:20:08.924 ================ 00:20:08.924 Range in us Cumulative Count 00:20:08.924 3.985 - 4.015: 0.0318% ( 3) 00:20:08.924 4.015 - 4.044: 0.0848% ( 5) 00:20:08.924 4.044 - 4.073: 0.2120% ( 12) 00:20:08.924 4.073 - 4.102: 0.6571% ( 42) 00:20:08.924 4.102 - 4.131: 2.4587% ( 170) 00:20:08.924 4.131 - 4.160: 6.0513% ( 339) 00:20:08.925 4.160 - 4.189: 10.7673% ( 445) 00:20:08.925 4.189 - 4.218: 15.9496% ( 489) 00:20:08.925 4.218 - 4.247: 21.4498% ( 519) 00:20:08.925 4.247 - 4.276: 28.0203% ( 620) 00:20:08.925 4.276 - 4.305: 35.3116% ( 688) 00:20:08.925 4.305 - 4.335: 43.5990% ( 782) 00:20:08.925 4.335 - 4.364: 51.7168% ( 766) 00:20:08.925 4.364 - 4.393: 58.3510% ( 626) 00:20:08.925 4.393 - 4.422: 63.6604% ( 501) 00:20:08.925 4.422 - 4.451: 68.1327% ( 422) 00:20:08.925 4.451 - 4.480: 71.9585% ( 361) 00:20:08.925 4.480 - 4.509: 74.9152% ( 279) 00:20:08.925 4.509 - 4.538: 78.0521% ( 296) 00:20:08.925 4.538 - 4.567: 80.5744% ( 238) 00:20:08.925 4.567 - 4.596: 82.6833% ( 199) 00:20:08.925 4.596 - 4.625: 84.8241% ( 202) 00:20:08.925 4.625 - 4.655: 86.6681% ( 174) 00:20:08.925 4.655 - 4.684: 88.3849% ( 162) 00:20:08.925 4.684 - 4.713: 89.6672% ( 121) 00:20:08.925 4.713 - 4.742: 90.9602% ( 122) 00:20:08.925 4.742 - 4.771: 91.8080% ( 80) 00:20:08.925 4.771 - 4.800: 92.7406% ( 88) 00:20:08.925 4.800 - 4.829: 93.4718% ( 69) 00:20:08.925 4.829 - 4.858: 93.8321% ( 34) 00:20:08.925 4.858 - 4.887: 94.2666% ( 41) 00:20:08.925 4.887 - 4.916: 94.5952% ( 31) 00:20:08.925 4.916 - 4.945: 94.8389% ( 23) 00:20:08.925 4.945 - 4.975: 95.0403% ( 19) 00:20:08.925 4.975 - 5.004: 95.1992% ( 15) 00:20:08.925 5.004 - 5.033: 95.3688% ( 16) 00:20:08.925 5.033 - 5.062: 95.4536% ( 8) 00:20:08.925 5.062 - 5.091: 95.5384% ( 8) 00:20:08.925 5.091 - 5.120: 95.5808% ( 4) 00:20:08.925 5.120 - 5.149: 95.6125% ( 3) 00:20:08.925 5.149 - 5.178: 95.6231% ( 1) 00:20:08.925 5.178 - 5.207: 95.6337% ( 1) 00:20:08.925 5.207 - 5.236: 95.6655% ( 3) 00:20:08.925 5.236 - 5.265: 95.6761% ( 1) 00:20:08.925 5.295 - 5.324: 95.6867% ( 1) 00:20:08.925 5.324 - 5.353: 95.6973% ( 1) 00:20:08.925 5.382 - 5.411: 95.7079% ( 1) 00:20:08.925 5.469 - 5.498: 95.7397% ( 3) 00:20:08.925 5.585 - 5.615: 95.7609% ( 2) 00:20:08.925 5.615 - 5.644: 95.7715% ( 1) 00:20:08.925 5.673 - 5.702: 95.7821% ( 1) 00:20:08.925 5.731 - 5.760: 95.7927% ( 1) 00:20:08.925 5.789 - 5.818: 95.8033% ( 1) 00:20:08.925 5.905 - 5.935: 95.8139% ( 1) 00:20:08.925 5.964 - 5.993: 95.8245% ( 1) 00:20:08.925 6.138 - 6.167: 95.8351% ( 1) 00:20:08.925 6.196 - 6.225: 95.8669% ( 3) 00:20:08.925 6.255 - 6.284: 95.8775% ( 1) 
00:20:08.925 6.313 - 6.342: 95.8881% ( 1) 00:20:08.925 6.342 - 6.371: 95.9199% ( 3) 00:20:08.925 6.371 - 6.400: 95.9305% ( 1) 00:20:08.925 6.400 - 6.429: 95.9729% ( 4) 00:20:08.925 6.429 - 6.458: 95.9941% ( 2) 00:20:08.925 6.458 - 6.487: 96.0047% ( 1) 00:20:08.925 6.487 - 6.516: 96.0153% ( 1) 00:20:08.925 6.516 - 6.545: 96.0259% ( 1) 00:20:08.925 6.545 - 6.575: 96.0577% ( 3) 00:20:08.925 6.575 - 6.604: 96.1000% ( 4) 00:20:08.925 6.604 - 6.633: 96.1424% ( 4) 00:20:08.925 6.633 - 6.662: 96.1530% ( 1) 00:20:08.925 6.662 - 6.691: 96.2060% ( 5) 00:20:08.925 6.691 - 6.720: 96.2802% ( 7) 00:20:08.925 6.720 - 6.749: 96.2908% ( 1) 00:20:08.925 6.749 - 6.778: 96.3226% ( 3) 00:20:08.925 6.778 - 6.807: 96.3650% ( 4) 00:20:08.925 6.807 - 6.836: 96.3756% ( 1) 00:20:08.925 6.836 - 6.865: 96.3968% ( 2) 00:20:08.925 6.895 - 6.924: 96.4604% ( 6) 00:20:08.925 6.953 - 6.982: 96.4816% ( 2) 00:20:08.925 6.982 - 7.011: 96.4922% ( 1) 00:20:08.925 7.011 - 7.040: 96.5240% ( 3) 00:20:08.925 7.040 - 7.069: 96.5451% ( 2) 00:20:08.925 7.098 - 7.127: 96.5663% ( 2) 00:20:08.925 7.127 - 7.156: 96.5769% ( 1) 00:20:08.925 7.156 - 7.185: 96.5875% ( 1) 00:20:08.925 7.185 - 7.215: 96.5981% ( 1) 00:20:08.925 7.215 - 7.244: 96.6193% ( 2) 00:20:08.925 7.244 - 7.273: 96.6299% ( 1) 00:20:08.925 7.273 - 7.302: 96.6405% ( 1) 00:20:08.925 7.302 - 7.331: 96.6723% ( 3) 00:20:08.925 7.331 - 7.360: 96.6829% ( 1) 00:20:08.925 7.360 - 7.389: 96.7465% ( 6) 00:20:08.925 7.389 - 7.418: 96.7571% ( 1) 00:20:08.925 7.418 - 7.447: 96.7889% ( 3) 00:20:08.925 7.447 - 7.505: 96.8313% ( 4) 00:20:08.925 7.505 - 7.564: 96.9055% ( 7) 00:20:08.925 7.564 - 7.622: 96.9691% ( 6) 00:20:08.925 7.622 - 7.680: 97.0220% ( 5) 00:20:08.925 7.680 - 7.738: 97.0644% ( 4) 00:20:08.925 7.738 - 7.796: 97.1068% ( 4) 00:20:08.925 7.796 - 7.855: 97.2446% ( 13) 00:20:08.925 7.855 - 7.913: 97.3612% ( 11) 00:20:08.925 7.913 - 7.971: 97.4142% ( 5) 00:20:08.925 7.971 - 8.029: 97.4883% ( 7) 00:20:08.925 8.029 - 8.087: 97.5519% ( 6) 00:20:08.925 8.087 - 8.145: 97.5731% ( 2) 00:20:08.925 8.145 - 8.204: 97.5943% ( 2) 00:20:08.925 8.204 - 8.262: 97.6367% ( 4) 00:20:08.925 8.262 - 8.320: 97.6685% ( 3) 00:20:08.925 8.320 - 8.378: 97.6897% ( 2) 00:20:08.925 8.378 - 8.436: 97.7215% ( 3) 00:20:08.925 8.436 - 8.495: 97.7533% ( 3) 00:20:08.925 8.495 - 8.553: 97.7639% ( 1) 00:20:08.925 8.553 - 8.611: 97.7957% ( 3) 00:20:08.925 8.611 - 8.669: 97.8381% ( 4) 00:20:08.925 8.669 - 8.727: 97.8487% ( 1) 00:20:08.925 8.785 - 8.844: 97.8699% ( 2) 00:20:08.925 8.844 - 8.902: 97.8805% ( 1) 00:20:08.925 8.960 - 9.018: 97.8911% ( 1) 00:20:08.925 9.076 - 9.135: 97.9017% ( 1) 00:20:08.925 9.193 - 9.251: 97.9334% ( 3) 00:20:08.925 9.251 - 9.309: 97.9652% ( 3) 00:20:08.925 9.309 - 9.367: 97.9864% ( 2) 00:20:08.925 9.425 - 9.484: 97.9970% ( 1) 00:20:08.925 9.484 - 9.542: 98.0288% ( 3) 00:20:08.925 9.542 - 9.600: 98.0818% ( 5) 00:20:08.925 9.600 - 9.658: 98.1030% ( 2) 00:20:08.925 9.658 - 9.716: 98.1136% ( 1) 00:20:08.925 9.775 - 9.833: 98.1560% ( 4) 00:20:08.925 9.833 - 9.891: 98.1666% ( 1) 00:20:08.925 9.891 - 9.949: 98.1878% ( 2) 00:20:08.925 9.949 - 10.007: 98.1984% ( 1) 00:20:08.925 10.007 - 10.065: 98.2302% ( 3) 00:20:08.925 10.065 - 10.124: 98.2832% ( 5) 00:20:08.925 10.124 - 10.182: 98.3256% ( 4) 00:20:08.925 10.298 - 10.356: 98.3468% ( 2) 00:20:08.925 10.356 - 10.415: 98.3680% ( 2) 00:20:08.925 10.415 - 10.473: 98.3786% ( 1) 00:20:08.925 10.589 - 10.647: 98.3891% ( 1) 00:20:08.925 10.647 - 10.705: 98.4209% ( 3) 00:20:08.925 10.764 - 10.822: 98.4527% ( 3) 00:20:08.925 10.938 - 10.996: 98.4633% ( 1) 
00:20:08.925 10.996 - 11.055: 98.4845% ( 2) 00:20:08.925 11.055 - 11.113: 98.5269% ( 4) 00:20:08.925 11.287 - 11.345: 98.5375% ( 1) 00:20:08.925 11.345 - 11.404: 98.5587% ( 2) 00:20:08.925 11.520 - 11.578: 98.5693% ( 1) 00:20:08.925 11.578 - 11.636: 98.5799% ( 1) 00:20:08.925 11.636 - 11.695: 98.6011% ( 2) 00:20:08.925 11.695 - 11.753: 98.6117% ( 1) 00:20:08.925 11.811 - 11.869: 98.6223% ( 1) 00:20:08.925 11.869 - 11.927: 98.6435% ( 2) 00:20:08.925 11.927 - 11.985: 98.6753% ( 3) 00:20:08.925 11.985 - 12.044: 98.6859% ( 1) 00:20:08.925 12.044 - 12.102: 98.7283% ( 4) 00:20:08.925 12.102 - 12.160: 98.7389% ( 1) 00:20:08.925 12.218 - 12.276: 98.7495% ( 1) 00:20:08.925 12.335 - 12.393: 98.7601% ( 1) 00:20:08.925 12.393 - 12.451: 98.7813% ( 2) 00:20:08.925 12.451 - 12.509: 98.8025% ( 2) 00:20:08.925 12.509 - 12.567: 98.8131% ( 1) 00:20:08.925 12.742 - 12.800: 98.8237% ( 1) 00:20:08.925 12.800 - 12.858: 98.8343% ( 1) 00:20:08.925 12.858 - 12.916: 98.8448% ( 1) 00:20:08.925 12.916 - 12.975: 98.8554% ( 1) 00:20:08.925 12.975 - 13.033: 98.8660% ( 1) 00:20:08.925 13.033 - 13.091: 98.8766% ( 1) 00:20:08.925 13.207 - 13.265: 98.8978% ( 2) 00:20:08.925 13.265 - 13.324: 98.9084% ( 1) 00:20:08.925 13.324 - 13.382: 98.9190% ( 1) 00:20:08.925 13.382 - 13.440: 98.9508% ( 3) 00:20:08.925 13.440 - 13.498: 98.9720% ( 2) 00:20:08.925 13.498 - 13.556: 98.9932% ( 2) 00:20:08.925 13.556 - 13.615: 99.0038% ( 1) 00:20:08.925 13.615 - 13.673: 99.0250% ( 2) 00:20:08.925 13.673 - 13.731: 99.0462% ( 2) 00:20:08.925 13.731 - 13.789: 99.0568% ( 1) 00:20:08.925 13.789 - 13.847: 99.0780% ( 2) 00:20:08.925 13.905 - 13.964: 99.1098% ( 3) 00:20:08.925 14.022 - 14.080: 99.1522% ( 4) 00:20:08.925 14.138 - 14.196: 99.1628% ( 1) 00:20:08.925 14.196 - 14.255: 99.1734% ( 1) 00:20:08.925 14.429 - 14.487: 99.1840% ( 1) 00:20:08.925 14.487 - 14.545: 99.2158% ( 3) 00:20:08.925 14.545 - 14.604: 99.2370% ( 2) 00:20:08.925 14.836 - 14.895: 99.2476% ( 1) 00:20:08.925 14.895 - 15.011: 99.2582% ( 1) 00:20:08.925 15.011 - 15.127: 99.2688% ( 1) 00:20:08.925 15.244 - 15.360: 99.2794% ( 1) 00:20:08.925 15.360 - 15.476: 99.3111% ( 3) 00:20:08.925 15.593 - 15.709: 99.3217% ( 1) 00:20:08.926 15.709 - 15.825: 99.3429% ( 2) 00:20:08.926 15.825 - 15.942: 99.3535% ( 1) 00:20:08.926 15.942 - 16.058: 99.3641% ( 1) 00:20:08.926 16.058 - 16.175: 99.3853% ( 2) 00:20:08.926 16.407 - 16.524: 99.3959% ( 1) 00:20:08.926 16.640 - 16.756: 99.4065% ( 1) 00:20:08.926 18.036 - 18.153: 99.4171% ( 1) 00:20:08.926 18.385 - 18.502: 99.4277% ( 1) 00:20:08.926 18.735 - 18.851: 99.4383% ( 1) 00:20:08.926 18.851 - 18.967: 99.4595% ( 2) 00:20:08.926 18.967 - 19.084: 99.4701% ( 1) 00:20:08.926 19.084 - 19.200: 99.4913% ( 2) 00:20:08.926 19.200 - 19.316: 99.5125% ( 2) 00:20:08.926 19.316 - 19.433: 99.5549% ( 4) 00:20:08.926 19.433 - 19.549: 99.5761% ( 2) 00:20:08.926 19.549 - 19.665: 99.6185% ( 4) 00:20:08.926 19.665 - 19.782: 99.6291% ( 1) 00:20:08.926 19.782 - 19.898: 99.6397% ( 1) 00:20:08.926 20.015 - 20.131: 99.6609% ( 2) 00:20:08.926 20.131 - 20.247: 99.6821% ( 2) 00:20:08.926 20.247 - 20.364: 99.6927% ( 1) 00:20:08.926 20.364 - 20.480: 99.7139% ( 2) 00:20:08.926 20.480 - 20.596: 99.7245% ( 1) 00:20:08.926 20.596 - 20.713: 99.7563% ( 3) 00:20:08.926 20.713 - 20.829: 99.7774% ( 2) 00:20:08.926 20.829 - 20.945: 99.7880% ( 1) 00:20:08.926 21.062 - 21.178: 99.8092% ( 2) 00:20:08.926 21.178 - 21.295: 99.8198% ( 1) 00:20:08.926 21.295 - 21.411: 99.8304% ( 1) 00:20:08.926 22.109 - 22.225: 99.8410% ( 1) 00:20:08.926 23.971 - 24.087: 99.8516% ( 1) 00:20:08.926 26.182 - 26.298: 
99.8622% ( 1) 00:20:08.926 26.298 - 26.415: 99.8728% ( 1) 00:20:08.926 28.276 - 28.393: 99.8834% ( 1) 00:20:08.926 28.509 - 28.625: 99.8940% ( 1) 00:20:08.926 28.742 - 28.858: 99.9046% ( 1) 00:20:08.926 3991.738 - 4021.527: 99.9364% ( 3) 00:20:08.926 4021.527 - 4051.316: 99.9788% ( 4) 00:20:08.926 4051.316 - 4081.105: 99.9894% ( 1) 00:20:08.926 7030.225 - 7060.015: 100.0000% ( 1) 00:20:08.926 00:20:08.926 Complete histogram 00:20:08.926 ================== 00:20:08.926 Range in us Cumulative Count 00:20:08.926 2.342 - 2.356: 0.2331% ( 22) 00:20:08.926 2.356 - 2.371: 0.6783% ( 42) 00:20:08.926 2.371 - 2.385: 0.8796% ( 19) 00:20:08.926 2.385 - 2.400: 0.9220% ( 4) 00:20:08.926 2.400 - 2.415: 0.9856% ( 6) 00:20:08.926 2.415 - 2.429: 3.5714% ( 244) 00:20:08.926 2.429 - 2.444: 16.9881% ( 1266) 00:20:08.926 2.444 - 2.458: 26.3883% ( 887) 00:20:08.926 2.458 - 2.473: 29.0059% ( 247) 00:20:08.926 2.473 - 2.487: 30.1187% ( 105) 00:20:08.926 2.487 - 2.502: 30.9983% ( 83) 00:20:08.926 2.502 - 2.516: 36.5515% ( 524) 00:20:08.926 2.516 - 2.531: 55.9559% ( 1831) 00:20:08.926 2.531 - 2.545: 68.9063% ( 1222) 00:20:08.926 2.545 - 2.560: 73.5481% ( 438) 00:20:08.926 2.560 - 2.575: 75.5829% ( 192) 00:20:08.926 2.575 - 2.589: 77.1195% ( 145) 00:20:08.926 2.589 - 2.604: 78.3913% ( 120) 00:20:08.926 2.604 - 2.618: 79.2815% ( 84) 00:20:08.926 2.618 - 2.633: 81.2632% ( 187) 00:20:08.926 2.633 - 2.647: 84.2624% ( 283) 00:20:08.926 2.647 - 2.662: 86.1064% ( 174) 00:20:08.926 2.662 - 2.676: 87.0708% ( 91) 00:20:08.926 2.676 - 2.691: 87.5795% ( 48) 00:20:08.926 2.691 - 2.705: 88.1518% ( 54) 00:20:08.926 2.705 - 2.720: 89.2327% ( 102) 00:20:08.926 2.720 - 2.735: 91.6278% ( 226) 00:20:08.926 2.735 - 2.749: 93.8003% ( 205) 00:20:08.926 2.749 - 2.764: 94.7223% ( 87) 00:20:08.926 2.764 - 2.778: 95.1886% ( 44) 00:20:08.926 2.778 - 2.793: 95.5172% ( 31) 00:20:08.926 2.793 - 2.807: 95.9199% ( 38) 00:20:08.926 2.807 - 2.822: 96.2166% ( 28) 00:20:08.926 2.822 - 2.836: 96.4286% ( 20) 00:20:08.926 2.836 - 2.851: 96.7147% ( 27) 00:20:08.926 2.851 - 2.865: 96.8737% ( 15) 00:20:08.926 2.865 - 2.880: 97.0326% ( 15) 00:20:08.926 2.880 - 2.895: 97.1916% ( 15) 00:20:08.926 2.895 - 2.909: 97.2764% ( 8) 00:20:08.926 2.909 - 2.924: 97.4036% ( 12) 00:20:08.926 2.924 - 2.938: 97.5731% ( 16) 00:20:08.926 2.938 - 2.953: 97.5943% ( 2) 00:20:08.926 2.953 - 2.967: 97.6579% ( 6) 00:20:08.926 2.967 - 2.982: 97.7109% ( 5) 00:20:08.926 2.982 - 2.996: 97.7745% ( 6) 00:20:08.926 2.996 - 3.011: 97.8169% ( 4) 00:20:08.926 3.011 - 3.025: 97.8699% ( 5) 00:20:08.926 3.025 - 3.040: 97.9017% ( 3) 00:20:08.926 3.040 - 3.055: 97.9546% ( 5) 00:20:08.926 3.055 - 3.069: 98.0288% ( 7) 00:20:08.926 3.069 - 3.084: 98.0394% ( 1) 00:20:08.926 3.084 - 3.098: 98.0500% ( 1) 00:20:08.926 3.098 - 3.113: 98.1136% ( 6) 00:20:08.926 3.113 - 3.127: 98.1242% ( 1) 00:20:08.926 3.127 - 3.142: 98.1454% ( 2) 00:20:08.926 3.156 - 3.171: 98.1666% ( 2) 00:20:08.926 3.185 - 3.200: 98.1772% ( 1) 00:20:08.926 3.244 - 3.258: 98.1878% ( 1) 00:20:08.926 3.273 - 3.287: 98.1984% ( 1) 00:20:08.926 3.389 - 3.404: 98.2196% ( 2) 00:20:08.926 3.418 - 3.433: 98.2302% ( 1) 00:20:08.926 3.433 - 3.447: 98.2408% ( 1) 00:20:08.926 3.811 - 3.840: 98.2514% ( 1) 00:20:08.926 3.985 - 4.015: 98.2620% ( 1) 00:20:08.926 4.160 - 4.189: 98.2726% ( 1) 00:20:08.926 4.305 - 4.335: 98.2832% ( 1) 00:20:08.926 4.800 - 4.829: 98.2938% ( 1) 00:20:08.926 4.887 - 4.916: 98.3044% ( 1) 00:20:08.926 4.945 - 4.975: 98.3150% ( 1) 00:20:08.926 4.975 - 5.004: 98.3256% ( 1) 00:20:08.926 5.062 - 5.091: 98.3362% ( 1) 00:20:08.926 
5.353 - 5.382: 98.3468% ( 1) 00:20:08.926 5.382 - 5.411: 98.3574% ( 1) 00:20:08.926 5.411 - 5.440: 98.3680% ( 1) 00:20:08.926 5.440 - 5.469: 98.3891% ( 2) 00:20:08.926 5.527 - 5.556: 98.3997% ( 1) 00:20:08.926 5.673 - 5.702: 98.4209% ( 2) 00:20:08.926 5.702 - 5.731: 98.4315% ( 1) 00:20:08.926 5.818 - 5.847: 98.4421% ( 1) 00:20:08.926 6.022 - 6.051: 98.4527% ( 1) 00:20:08.926 6.255 - 6.284: 98.4739% ( 2) 00:20:08.926 6.429 - 6.458: 98.4845% ( 1) 00:20:08.926 6.691 - 6.720: 98.5057% ( 2) 00:20:08.926 6.807 - 6.836: 98.5163% ( 1) 00:20:08.926 6.895 - 6.924: 98.5269% ( 1) 00:20:08.926 7.069 - 7.098: 98.5375% ( 1) 00:20:08.926 7.244 - 7.273: 98.5481% ( 1) 00:20:08.926 7.680 - 7.738: 98.5587% ( 1) 00:20:08.926 7.855 - 7.913: 98.5693% ( 1) 00:20:08.926 7.971 - 8.029: 98.5905% ( 2) 00:20:08.926 8.204 - 8.262: 98.6011% ( 1) 00:20:08.926 8.495 - 8.553: 98.6117% ( 1) 00:20:08.926 8.553 - 8.611: 98.6223% ( 1) 00:20:08.926 9.135 - 9.193: 98.6329% ( 1) 00:20:08.926 9.193 - 9.251: 98.6435% ( 1) 00:20:08.926 9.251 - 9.309: 98.6753% ( 3) 00:20:08.926 9.367 - 9.425: 98.6859% ( 1) 00:20:08.926 9.425 - 9.484: 98.7071% ( 2) 00:20:08.926 9.542 - 9.600: 98.7177% ( 1) 00:20:08.926 9.658 - 9.716: 98.7283% ( 1) 00:20:08.926 9.891 - 9.949: 98.7389% ( 1) 00:20:08.926 10.647 - 10.705: 98.7495% ( 1) 00:20:08.926 10.880 - 10.938: 98.7707% ( 2) 00:20:08.926 11.171 - 11.229: 98.7813% ( 1) 00:20:08.926 11.229 - 11.287: 98.7919% ( 1) 00:20:08.926 11.695 - 11.753: 98.8025% ( 1) 00:20:08.926 11.753 - 11.811: 98.8343% ( 3) 00:20:08.926 11.869 - 11.927: 98.8448%[2024-07-22 18:27:20.596022] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:08.926 ( 1) 00:20:08.926 12.218 - 12.276: 98.8554% ( 1) 00:20:08.926 12.393 - 12.451: 98.8660% ( 1) 00:20:08.926 12.858 - 12.916: 98.8766% ( 1) 00:20:08.926 12.916 - 12.975: 98.8872% ( 1) 00:20:08.926 13.498 - 13.556: 98.8978% ( 1) 00:20:08.926 14.720 - 14.778: 98.9084% ( 1) 00:20:08.926 15.011 - 15.127: 98.9190% ( 1) 00:20:08.926 15.709 - 15.825: 98.9296% ( 1) 00:20:08.926 17.222 - 17.338: 98.9402% ( 1) 00:20:08.926 17.455 - 17.571: 98.9614% ( 2) 00:20:08.926 17.571 - 17.687: 98.9720% ( 1) 00:20:08.926 17.687 - 17.804: 98.9932% ( 2) 00:20:08.926 17.804 - 17.920: 99.0250% ( 3) 00:20:08.926 17.920 - 18.036: 99.0356% ( 1) 00:20:08.926 18.153 - 18.269: 99.0462% ( 1) 00:20:08.926 18.269 - 18.385: 99.0568% ( 1) 00:20:08.926 18.385 - 18.502: 99.0674% ( 1) 00:20:08.926 18.618 - 18.735: 99.0886% ( 2) 00:20:08.926 18.735 - 18.851: 99.0992% ( 1) 00:20:08.926 18.967 - 19.084: 99.1098% ( 1) 00:20:08.926 19.084 - 19.200: 99.1204% ( 1) 00:20:08.926 19.200 - 19.316: 99.1310% ( 1) 00:20:08.926 19.433 - 19.549: 99.1416% ( 1) 00:20:08.926 20.015 - 20.131: 99.1522% ( 1) 00:20:08.926 21.411 - 21.527: 99.1628% ( 1) 00:20:08.926 22.225 - 22.342: 99.1734% ( 1) 00:20:08.926 3961.949 - 3991.738: 99.2900% ( 11) 00:20:08.926 3991.738 - 4021.527: 99.6927% ( 38) 00:20:08.926 4021.527 - 4051.316: 99.9576% ( 25) 00:20:08.926 4051.316 - 4081.105: 99.9682% ( 1) 00:20:08.926 4081.105 - 4110.895: 99.9788% ( 1) 00:20:08.926 5004.567 - 5034.356: 99.9894% ( 1) 00:20:08.926 7030.225 - 7060.015: 100.0000% ( 1) 00:20:08.926 00:20:08.927 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:20:08.927 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:20:08.927 18:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:20:08.927 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:20:08.927 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:09.185 [ 00:20:09.185 { 00:20:09.185 "allow_any_host": true, 00:20:09.185 "hosts": [], 00:20:09.185 "listen_addresses": [], 00:20:09.185 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:09.185 "subtype": "Discovery" 00:20:09.185 }, 00:20:09.185 { 00:20:09.185 "allow_any_host": true, 00:20:09.185 "hosts": [], 00:20:09.185 "listen_addresses": [ 00:20:09.185 { 00:20:09.185 "adrfam": "IPv4", 00:20:09.185 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:09.185 "trsvcid": "0", 00:20:09.185 "trtype": "VFIOUSER" 00:20:09.185 } 00:20:09.185 ], 00:20:09.185 "max_cntlid": 65519, 00:20:09.185 "max_namespaces": 32, 00:20:09.185 "min_cntlid": 1, 00:20:09.185 "model_number": "SPDK bdev Controller", 00:20:09.185 "namespaces": [ 00:20:09.185 { 00:20:09.185 "bdev_name": "Malloc1", 00:20:09.185 "name": "Malloc1", 00:20:09.185 "nguid": "F30C087608A84ACB8C0C9C7CA865FAE4", 00:20:09.185 "nsid": 1, 00:20:09.185 "uuid": "f30c0876-08a8-4acb-8c0c-9c7ca865fae4" 00:20:09.185 }, 00:20:09.185 { 00:20:09.185 "bdev_name": "Malloc3", 00:20:09.185 "name": "Malloc3", 00:20:09.185 "nguid": "95609156E19A464F8BD41A841EB62150", 00:20:09.185 "nsid": 2, 00:20:09.185 "uuid": "95609156-e19a-464f-8bd4-1a841eb62150" 00:20:09.185 } 00:20:09.185 ], 00:20:09.185 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:09.185 "serial_number": "SPDK1", 00:20:09.185 "subtype": "NVMe" 00:20:09.185 }, 00:20:09.185 { 00:20:09.185 "allow_any_host": true, 00:20:09.185 "hosts": [], 00:20:09.185 "listen_addresses": [ 00:20:09.185 { 00:20:09.185 "adrfam": "IPv4", 00:20:09.185 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:09.185 "trsvcid": "0", 00:20:09.185 "trtype": "VFIOUSER" 00:20:09.185 } 00:20:09.185 ], 00:20:09.185 "max_cntlid": 65519, 00:20:09.185 "max_namespaces": 32, 00:20:09.185 "min_cntlid": 1, 00:20:09.185 "model_number": "SPDK bdev Controller", 00:20:09.185 "namespaces": [ 00:20:09.185 { 00:20:09.185 "bdev_name": "Malloc2", 00:20:09.185 "name": "Malloc2", 00:20:09.185 "nguid": "0D43F84BE610420FAE0A84944499F12A", 00:20:09.185 "nsid": 1, 00:20:09.185 "uuid": "0d43f84b-e610-420f-ae0a-84944499f12a" 00:20:09.185 } 00:20:09.185 ], 00:20:09.185 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:09.185 "serial_number": "SPDK2", 00:20:09.185 "subtype": "NVMe" 00:20:09.185 } 00:20:09.185 ] 00:20:09.185 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:09.185 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=84790 00:20:09.185 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:20:09.185 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:20:09.185 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:20:09.185 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:09.185 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:20:09.185 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:20:09.185 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:09.185 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:09.185 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:20:09.185 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:20:09.185 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:09.451 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:09.451 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:20:09.451 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=3 00:20:09.451 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:09.451 [2024-07-22 18:27:21.284864] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:09.451 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:09.451 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:09.451 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:20:09.451 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:20:09.451 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:20:09.711 Malloc4 00:20:09.711 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:20:09.969 [2024-07-22 18:27:21.982269] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:10.228 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:10.228 Asynchronous Event Request test 00:20:10.228 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:10.228 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:10.228 Registering asynchronous event callbacks... 00:20:10.228 Starting namespace attribute notice tests for all controllers... 00:20:10.228 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:10.228 aer_cb - Changed Namespace 00:20:10.228 Cleaning up... 
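The AER check above works by hot-adding a second namespace while the aer tool is already connected and waiting: the waitforfile helper polls for the tool's touch file, then a fresh malloc bdev is attached to cnode2 as NSID 2, which produces the namespace-attribute-changed event logged as "aer_cb - Changed Namespace". A condensed sketch of that sequence under the same paths (the rpc variable is a local shorthand, and error handling is omitted):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Wait up to roughly 20 s for the aer tool to create its readiness file,
  # mirroring the waitforfile loop traced above (200 iterations of 0.1 s).
  i=0
  while [ ! -e /tmp/aer_touch_file ] && [ "$i" -lt 200 ]; do
      i=$((i + 1))
      sleep 0.1
  done
  # Hot-add a new 64 MiB malloc bdev as namespace 2 of cnode2; the running
  # controller then reports the change through an Asynchronous Event Request.
  $rpc bdev_malloc_create 64 512 --name Malloc4
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2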
00:20:10.228 [ 00:20:10.228 { 00:20:10.228 "allow_any_host": true, 00:20:10.228 "hosts": [], 00:20:10.228 "listen_addresses": [], 00:20:10.228 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:10.228 "subtype": "Discovery" 00:20:10.228 }, 00:20:10.228 { 00:20:10.228 "allow_any_host": true, 00:20:10.228 "hosts": [], 00:20:10.228 "listen_addresses": [ 00:20:10.228 { 00:20:10.228 "adrfam": "IPv4", 00:20:10.228 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:10.228 "trsvcid": "0", 00:20:10.228 "trtype": "VFIOUSER" 00:20:10.228 } 00:20:10.228 ], 00:20:10.228 "max_cntlid": 65519, 00:20:10.228 "max_namespaces": 32, 00:20:10.228 "min_cntlid": 1, 00:20:10.228 "model_number": "SPDK bdev Controller", 00:20:10.228 "namespaces": [ 00:20:10.228 { 00:20:10.228 "bdev_name": "Malloc1", 00:20:10.228 "name": "Malloc1", 00:20:10.228 "nguid": "F30C087608A84ACB8C0C9C7CA865FAE4", 00:20:10.228 "nsid": 1, 00:20:10.228 "uuid": "f30c0876-08a8-4acb-8c0c-9c7ca865fae4" 00:20:10.228 }, 00:20:10.228 { 00:20:10.228 "bdev_name": "Malloc3", 00:20:10.228 "name": "Malloc3", 00:20:10.228 "nguid": "95609156E19A464F8BD41A841EB62150", 00:20:10.228 "nsid": 2, 00:20:10.228 "uuid": "95609156-e19a-464f-8bd4-1a841eb62150" 00:20:10.228 } 00:20:10.228 ], 00:20:10.228 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:10.228 "serial_number": "SPDK1", 00:20:10.228 "subtype": "NVMe" 00:20:10.228 }, 00:20:10.228 { 00:20:10.228 "allow_any_host": true, 00:20:10.228 "hosts": [], 00:20:10.228 "listen_addresses": [ 00:20:10.228 { 00:20:10.228 "adrfam": "IPv4", 00:20:10.228 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:10.228 "trsvcid": "0", 00:20:10.228 "trtype": "VFIOUSER" 00:20:10.228 } 00:20:10.228 ], 00:20:10.228 "max_cntlid": 65519, 00:20:10.228 "max_namespaces": 32, 00:20:10.228 "min_cntlid": 1, 00:20:10.228 "model_number": "SPDK bdev Controller", 00:20:10.228 "namespaces": [ 00:20:10.228 { 00:20:10.228 "bdev_name": "Malloc2", 00:20:10.228 "name": "Malloc2", 00:20:10.228 "nguid": "0D43F84BE610420FAE0A84944499F12A", 00:20:10.228 "nsid": 1, 00:20:10.228 "uuid": "0d43f84b-e610-420f-ae0a-84944499f12a" 00:20:10.228 }, 00:20:10.228 { 00:20:10.228 "bdev_name": "Malloc4", 00:20:10.228 "name": "Malloc4", 00:20:10.228 "nguid": "432832F00BDD4F9891CBA466642BE77F", 00:20:10.228 "nsid": 2, 00:20:10.228 "uuid": "432832f0-0bdd-4f98-91cb-a466642be77f" 00:20:10.228 } 00:20:10.228 ], 00:20:10.228 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:10.228 "serial_number": "SPDK2", 00:20:10.228 "subtype": "NVMe" 00:20:10.228 } 00:20:10.228 ] 00:20:10.542 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 84790 00:20:10.542 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:20:10.542 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 84076 00:20:10.542 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 84076 ']' 00:20:10.542 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 84076 00:20:10.543 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:20:10.543 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:10.543 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84076 00:20:10.543 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:10.543 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:10.543 killing process with pid 84076 00:20:10.543 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84076' 00:20:10.543 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 84076 00:20:10.543 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 84076 00:20:12.446 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:12.446 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:12.446 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:20:12.446 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:20:12.446 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:20:12.446 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=84856 00:20:12.446 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:20:12.446 Process pid: 84856 00:20:12.446 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 84856' 00:20:12.446 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:12.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.446 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 84856 00:20:12.446 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 84856 ']' 00:20:12.446 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.446 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:12.446 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.446 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:12.446 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:12.446 [2024-07-22 18:27:24.288317] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:20:12.446 [2024-07-22 18:27:24.290969] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:20:12.446 [2024-07-22 18:27:24.291091] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.446 [2024-07-22 18:27:24.456204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:13.014 [2024-07-22 18:27:24.739598] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.014 [2024-07-22 18:27:24.739691] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.014 [2024-07-22 18:27:24.739714] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.014 [2024-07-22 18:27:24.739728] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.014 [2024-07-22 18:27:24.739760] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:13.014 [2024-07-22 18:27:24.739974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.014 [2024-07-22 18:27:24.740856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.014 [2024-07-22 18:27:24.740954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.014 [2024-07-22 18:27:24.740958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:13.272 [2024-07-22 18:27:25.111617] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:20:13.273 [2024-07-22 18:27:25.113668] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:20:13.273 [2024-07-22 18:27:25.114196] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:20:13.273 [2024-07-22 18:27:25.115452] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:20:13.273 [2024-07-22 18:27:25.116280] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
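After the first target is torn down, the same topology is rebuilt with the target running in interrupt mode: nvmf_tgt is relaunched with --interrupt-mode on cores 0-3, the VFIOUSER transport is created with the -M -I flags under test, and the two per-controller socket directories and subsystems are recreated. A condensed sketch of that bring-up, using the paths shown in the trace; backgrounding and the wait-for-RPC step are simplified, and the rpc variable is shorthand introduced here:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  nvmfpid=$!
  # ... wait for /var/tmp/spdk.sock to accept RPCs before continuing ...
  $rpc nvmf_create_transport -t VFIOUSER -M -I
  # Mirror the seq 1 2 loop in the traced script: one socket directory,
  # one malloc namespace, and one subsystem per vfio-user controller.
  for i in 1 2; do
      dir=/var/run/vfio-user/domain/vfio-user$i/$i
      mkdir -p "$dir"
      $rpc bdev_malloc_create 64 512 -b Malloc$i
      $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a "$dir" -s 0
  done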
00:20:13.273 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:13.273 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:20:13.273 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:20:14.208 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:20:14.466 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:20:14.466 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:20:14.466 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:14.466 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:20:14.466 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:15.041 Malloc1 00:20:15.041 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:20:15.326 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:20:15.586 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:20:15.844 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:15.845 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:20:15.845 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:16.103 Malloc2 00:20:16.103 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:20:16.361 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:20:16.619 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:20:16.877 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:20:16.877 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 84856 00:20:16.877 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 84856 ']' 00:20:16.877 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 84856 00:20:16.877 18:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:20:16.877 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:16.877 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84856 00:20:16.877 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:16.877 killing process with pid 84856 00:20:16.877 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:16.877 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84856' 00:20:16.877 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 84856 00:20:16.877 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 84856 00:20:18.777 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:18.777 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:18.777 00:20:18.777 real 1m2.044s 00:20:18.777 user 3m56.028s 00:20:18.777 sys 0m5.865s 00:20:18.777 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:18.777 ************************************ 00:20:18.777 END TEST nvmf_vfio_user 00:20:18.777 ************************************ 00:20:18.777 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:18.777 18:27:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:20:18.777 18:27:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:18.777 18:27:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:18.777 18:27:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:18.777 18:27:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:18.777 ************************************ 00:20:18.777 START TEST nvmf_vfio_user_nvme_compliance 00:20:18.777 ************************************ 00:20:18.777 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:18.777 * Looking for test storage... 
00:20:18.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/compliance 00:20:18.777 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:18.777 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:20:18.777 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:18.777 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:18.777 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:18.777 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:18.777 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=85066 00:20:18.778 Process pid: 85066 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 85066' 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 85066 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 85066 ']' 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.778 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:19.036 [2024-07-22 18:27:30.860105] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:20:19.036 [2024-07-22 18:27:30.860319] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.036 [2024-07-22 18:27:31.032389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:19.601 [2024-07-22 18:27:31.313506] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
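The compliance run drives a single vfio-user controller that is configured over the RPC socket once this target finishes starting: a VFIOUSER transport, one 64 MiB malloc namespace, and subsystem nqn.2021-09.io.spdk:cnode0 allowing up to 32 namespaces (-m 32), listening on /var/run/vfio-user. A condensed sketch of that setup plus the compliance binary invocation, assuming the same paths as the trace below (rpc is a local shorthand; the traced script goes through its rpc_cmd wrapper instead):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  mkdir -p /var/run/vfio-user
  $rpc nvmf_create_transport -t VFIOUSER
  $rpc bdev_malloc_create 64 512 -b malloc0
  $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  # Run the CUnit-based compliance suite against the freshly created controller.
  /home/vagrant/spdk_repo/spdk/test/nvme/compliance/nvme_compliance -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'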
00:20:19.601 [2024-07-22 18:27:31.313625] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.601 [2024-07-22 18:27:31.313656] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.601 [2024-07-22 18:27:31.313688] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.601 [2024-07-22 18:27:31.313712] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:19.601 [2024-07-22 18:27:31.314071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.601 [2024-07-22 18:27:31.314976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.601 [2024-07-22 18:27:31.314995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.859 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:19.859 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:20:19.859 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:20:21.234 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:21.234 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:20:21.234 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:21.234 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.234 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:21.234 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.234 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:20:21.234 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:21.234 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.234 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:21.234 malloc0 00:20:21.234 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.234 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:20:21.234 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.234 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:21.234 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.234 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 
malloc0 00:20:21.234 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.234 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:21.234 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.234 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:21.234 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.234 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:21.234 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.234 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:20:21.234 00:20:21.234 00:20:21.234 CUnit - A unit testing framework for C - Version 2.1-3 00:20:21.234 http://cunit.sourceforge.net/ 00:20:21.234 00:20:21.234 00:20:21.234 Suite: nvme_compliance 00:20:21.511 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-22 18:27:33.275064] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:21.511 [2024-07-22 18:27:33.276935] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:20:21.511 [2024-07-22 18:27:33.276990] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:20:21.511 [2024-07-22 18:27:33.277025] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:20:21.511 [2024-07-22 18:27:33.280126] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:21.511 passed 00:20:21.511 Test: admin_identify_ctrlr_verify_fused ...[2024-07-22 18:27:33.407697] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:21.511 [2024-07-22 18:27:33.410733] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:21.511 passed 00:20:21.773 Test: admin_identify_ns ...[2024-07-22 18:27:33.536597] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:21.773 [2024-07-22 18:27:33.596892] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:20:21.773 [2024-07-22 18:27:33.604883] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:20:21.773 [2024-07-22 18:27:33.626207] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:21.773 passed 00:20:21.773 Test: admin_get_features_mandatory_features ...[2024-07-22 18:27:33.754819] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:21.773 [2024-07-22 18:27:33.757846] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:22.031 passed 00:20:22.031 Test: admin_get_features_optional_features ...[2024-07-22 18:27:33.886407] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:22.031 [2024-07-22 18:27:33.891464] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:22.031 passed 00:20:22.031 Test: admin_set_features_number_of_queues ...[2024-07-22 18:27:34.019384] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:22.290 [2024-07-22 18:27:34.126970] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:22.290 passed 00:20:22.290 Test: admin_get_log_page_mandatory_logs ...[2024-07-22 18:27:34.251711] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:22.290 [2024-07-22 18:27:34.254763] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:22.548 passed 00:20:22.548 Test: admin_get_log_page_with_lpo ...[2024-07-22 18:27:34.381725] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:22.548 [2024-07-22 18:27:34.444923] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:20:22.548 [2024-07-22 18:27:34.458101] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:22.548 passed 00:20:22.807 Test: fabric_property_get ...[2024-07-22 18:27:34.577971] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:22.807 [2024-07-22 18:27:34.579608] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:20:22.807 [2024-07-22 18:27:34.584049] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:22.807 passed 00:20:22.807 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-22 18:27:34.708620] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:22.807 [2024-07-22 18:27:34.710326] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:20:22.807 [2024-07-22 18:27:34.711689] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:22.807 passed 00:20:23.066 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-22 18:27:34.840387] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:23.066 [2024-07-22 18:27:34.925895] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:23.066 [2024-07-22 18:27:34.941880] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:23.066 [2024-07-22 18:27:34.947979] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:23.066 passed 00:20:23.066 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-22 18:27:35.075922] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:23.066 [2024-07-22 18:27:35.077624] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:20:23.066 [2024-07-22 18:27:35.078965] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:23.324 passed 00:20:23.324 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-22 18:27:35.205734] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:23.324 [2024-07-22 18:27:35.280914] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:23.324 [2024-07-22 18:27:35.304898] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:23.324 [2024-07-22 18:27:35.310970] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:23.583 passed 00:20:23.583 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-22 18:27:35.436450] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:23.583 [2024-07-22 18:27:35.438176] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:20:23.583 [2024-07-22 18:27:35.438356] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:20:23.583 [2024-07-22 18:27:35.441535] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:23.583 passed 00:20:23.583 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-22 18:27:35.569251] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:23.842 [2024-07-22 18:27:35.662901] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:20:23.842 [2024-07-22 18:27:35.670871] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:20:23.842 [2024-07-22 18:27:35.678906] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:20:23.842 [2024-07-22 18:27:35.686853] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:20:23.842 [2024-07-22 18:27:35.716959] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:23.842 passed 00:20:23.842 Test: admin_create_io_sq_verify_pc ...[2024-07-22 18:27:35.841278] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:24.101 [2024-07-22 18:27:35.861005] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:20:24.101 [2024-07-22 18:27:35.885165] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:24.101 passed 00:20:24.101 Test: admin_create_io_qp_max_qps ...[2024-07-22 18:27:36.017967] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:25.109 [2024-07-22 18:27:37.118872] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:20:25.676 [2024-07-22 18:27:37.529280] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:25.676 passed 00:20:25.676 Test: admin_create_io_sq_shared_cq ...[2024-07-22 18:27:37.659314] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:25.935 [2024-07-22 18:27:37.802866] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:25.935 [2024-07-22 18:27:37.840067] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:25.935 passed 00:20:25.935 00:20:25.935 Run Summary: Type Total Ran Passed Failed Inactive 00:20:25.935 suites 1 1 n/a 0 0 00:20:25.935 tests 18 18 18 0 0 00:20:25.935 asserts 360 360 360 0 n/a 00:20:25.935 00:20:25.935 Elapsed time = 1.986 seconds 00:20:26.193 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 85066 00:20:26.193 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 85066 ']' 00:20:26.193 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 85066 00:20:26.193 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@953 -- # uname 00:20:26.193 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:26.193 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85066 00:20:26.193 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:26.193 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:26.194 killing process with pid 85066 00:20:26.194 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85066' 00:20:26.194 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 85066 00:20:26.194 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 85066 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:20:27.713 00:20:27.713 real 0m8.768s 00:20:27.713 user 0m23.528s 00:20:27.713 sys 0m0.871s 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:27.713 ************************************ 00:20:27.713 END TEST nvmf_vfio_user_nvme_compliance 00:20:27.713 ************************************ 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:27.713 ************************************ 00:20:27.713 START TEST nvmf_vfio_user_fuzz 00:20:27.713 ************************************ 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:27.713 * Looking for test storage... 
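The nvmf_vfio_user_fuzz stage starting here provisions the same kind of vfio-user target that the compliance stage above just tore down. Condensed from the rpc_cmd traces in the compliance stage, and assuming rpc_cmd simply forwards to scripts/rpc.py on the default /var/tmp/spdk.sock (the harness helper may add retries), the bring-up is roughly this sketch:

    # Minimal sketch of the vfio-user target bring-up traced in the compliance stage above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t VFIOUSER                 # register the vfio-user transport
    mkdir -p /var/run/vfio-user                            # directory backing the listener
    $rpc bdev_malloc_create 64 512 -b malloc0              # 64 MB RAM bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

    # The compliance binary then connects through that directory:
    /home/vagrant/spdk_repo/spdk/test/nvme/compliance/nvme_compliance -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'

The fuzz stage below repeats the same sequence, differing only in that its nvmf_create_subsystem call drops the -m 32 limit.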
00:20:27.713 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=85234 00:20:27.713 Process pid: 85234 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 85234' 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 85234 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 85234 ']' 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
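Before any of the fuzz-stage RPCs can be issued, the script traced just above starts the target and blocks in waitforlisten until its RPC socket answers. A minimal sketch of that pattern, assuming the default /var/tmp/spdk.sock and using rpc_get_methods only as an illustrative liveness probe (the real waitforlisten helper in autotest_common.sh does more bookkeeping):

    SPDK=/home/vagrant/spdk_repo/spdk

    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &       # shm id 0, tracepoint mask 0xFFFF, one core
    nvmfpid=$!

    # Poll the RPC socket until the app is ready to serve requests.
    until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done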
00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:27.713 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:28.684 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:28.684 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:20:28.684 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:20:30.062 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:30.062 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.062 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:30.062 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.062 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:20:30.062 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:30.062 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.062 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:30.062 malloc0 00:20:30.062 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.062 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:20:30.062 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.062 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:30.062 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.062 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:30.062 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.062 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:30.062 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.062 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:30.062 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.062 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:30.062 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.062 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
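Each stage in this log ends with the killprocess helper, whose xtrace output (kill -0, uname, ps --no-headers -o comm=, kill, wait) already appeared after the compliance run and shows up again once the fuzzer below finishes. Condensed from those traces (the real helper in autotest_common.sh also covers the sudo-wrapped and non-Linux branches hinted at by the '[' Linux = Linux ']' and '[' reactor_0 = sudo ']' checks), it behaves roughly like:

    killprocess_sketch() {                      # sketch only, not the exact helper
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1              # still alive?
        proc=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0 for an SPDK app
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                     # reap it so the next stage starts clean
    }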
00:20:30.062 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:20:30.998 Shutting down the fuzz application 00:20:30.998 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:20:30.998 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.998 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:30.998 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.998 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 85234 00:20:30.998 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 85234 ']' 00:20:30.998 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 85234 00:20:30.998 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:20:30.998 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:30.998 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85234 00:20:30.998 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:30.998 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:30.998 killing process with pid 85234 00:20:30.998 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85234' 00:20:30.998 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 85234 00:20:30.998 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 85234 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_log.txt /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:20:32.376 00:20:32.376 real 0m4.705s 00:20:32.376 user 0m5.301s 00:20:32.376 sys 0m0.689s 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:32.376 ************************************ 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:32.376 END TEST nvmf_vfio_user_fuzz 00:20:32.376 ************************************ 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:32.376 18:27:44 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:32.376 ************************************ 00:20:32.376 START TEST nvmf_auth_target 00:20:32.376 ************************************ 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:32.376 * Looking for test storage... 00:20:32.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:32.376 
18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- 
# NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:32.376 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:32.377 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:32.377 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:32.377 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:32.377 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:32.377 Cannot find device "nvmf_tgt_br" 00:20:32.377 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:20:32.377 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:32.377 Cannot find device "nvmf_tgt_br2" 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:32.636 Cannot find device "nvmf_tgt_br" 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:32.636 Cannot find device "nvmf_tgt_br2" 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:32.636 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:32.636 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:32.636 18:27:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:32.636 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:32.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:32.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:32.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:32.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:20:32.895 00:20:32.895 --- 10.0.0.2 ping statistics --- 00:20:32.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.895 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:20:32.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:32.895 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:32.895 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.147 ms 00:20:32.895 00:20:32.895 --- 10.0.0.3 ping statistics --- 00:20:32.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.895 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:20:32.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:32.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:32.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:20:32.895 00:20:32.895 --- 10.0.0.1 ping statistics --- 00:20:32.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.895 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:20:32.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:32.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:20:32.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:32.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:32.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:32.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:32.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:32.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:32.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:32.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:20:32.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:32.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:32.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=85479 00:20:32.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:32.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 85479 00:20:32.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 85479 ']' 00:20:32.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:32.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
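The ping results above confirm the topology that nvmf_veth_init built for the auth test before the target was started inside the namespace. Stripped of harness tracing and cleanup noise, and keeping the device names and addresses exactly as they appear in the trace, the topology amounts to roughly:

    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, root namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target port
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target port

    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge ties the veth peers together
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # (the individual link-up commands and the two iptables ACCEPT rules from the trace are omitted here)

Running the target under ip netns exec nvmf_tgt_ns_spdk, as the NVMF_APP line above shows, keeps 10.0.0.2 and 10.0.0.3 reachable from the initiator side only through that bridge, which is exactly what the three pings verify.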
00:20:32.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:32.895 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.830 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:33.830 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:33.830 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:33.830 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:33.830 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.830 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.830 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=85523 00:20:33.830 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:33.830 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:33.830 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:20:33.830 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:33.830 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:33.830 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:33.830 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:20:33.830 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:33.830 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:33.830 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=904efeaa86ede267a3e574cd9c1ebc408eac6ae100d4a362 00:20:33.830 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:20:33.830 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.AxW 00:20:33.830 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 904efeaa86ede267a3e574cd9c1ebc408eac6ae100d4a362 0 00:20:33.830 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 904efeaa86ede267a3e574cd9c1ebc408eac6ae100d4a362 0 00:20:33.830 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:33.830 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:33.830 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=904efeaa86ede267a3e574cd9c1ebc408eac6ae100d4a362 00:20:33.830 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:20:33.830 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:34.089 18:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.AxW 00:20:34.089 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.AxW 00:20:34.089 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.AxW 00:20:34.089 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:20:34.089 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:34.089 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=90c0a4be647cf630544e4318cac1401a6582cde0de3a01a13447f67c87f0dcf5 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.1lW 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 90c0a4be647cf630544e4318cac1401a6582cde0de3a01a13447f67c87f0dcf5 3 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 90c0a4be647cf630544e4318cac1401a6582cde0de3a01a13447f67c87f0dcf5 3 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=90c0a4be647cf630544e4318cac1401a6582cde0de3a01a13447f67c87f0dcf5 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.1lW 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.1lW 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.1lW 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:20:34.090 18:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=da45945a26239e58194070440644a8ef 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.sBj 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key da45945a26239e58194070440644a8ef 1 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 da45945a26239e58194070440644a8ef 1 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=da45945a26239e58194070440644a8ef 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:20:34.090 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.sBj 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.sBj 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.sBj 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d3f75f6a6f0debf2bb9f05ce85105f0ae0756d09de3c4f19 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.CU3 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d3f75f6a6f0debf2bb9f05ce85105f0ae0756d09de3c4f19 2 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d3f75f6a6f0debf2bb9f05ce85105f0ae0756d09de3c4f19 2 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d3f75f6a6f0debf2bb9f05ce85105f0ae0756d09de3c4f19 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.CU3 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.CU3 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.CU3 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0f3787f39f796983b80a4aa18f1a5dbe14684dcef46fd47e 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ZBq 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0f3787f39f796983b80a4aa18f1a5dbe14684dcef46fd47e 2 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0f3787f39f796983b80a4aa18f1a5dbe14684dcef46fd47e 2 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0f3787f39f796983b80a4aa18f1a5dbe14684dcef46fd47e 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:20:34.090 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ZBq 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ZBq 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.ZBq 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:34.349 18:27:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=484f4f27e89b81913429b4fb380d6dd7 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.xu6 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 484f4f27e89b81913429b4fb380d6dd7 1 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 484f4f27e89b81913429b4fb380d6dd7 1 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=484f4f27e89b81913429b4fb380d6dd7 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.xu6 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.xu6 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.xu6 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=11a98690c57ecd2f6cc70d45abad3851b441ce59d305f5b12beb9f91f4c754e0 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.fNv 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 
11a98690c57ecd2f6cc70d45abad3851b441ce59d305f5b12beb9f91f4c754e0 3 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 11a98690c57ecd2f6cc70d45abad3851b441ce59d305f5b12beb9f91f4c754e0 3 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=11a98690c57ecd2f6cc70d45abad3851b441ce59d305f5b12beb9f91f4c754e0 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:34.349 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.fNv 00:20:34.350 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.fNv 00:20:34.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.350 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.fNv 00:20:34.350 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:20:34.350 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 85479 00:20:34.350 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 85479 ']' 00:20:34.350 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.350 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:34.350 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.350 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:34.350 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:20:34.609 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:34.609 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:34.610 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 85523 /var/tmp/host.sock 00:20:34.610 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 85523 ']' 00:20:34.610 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:20:34.610 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:34.610 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
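The block above generates the whole key matrix for the test: for every entry, gen_dhchap_key pulls len/2 random bytes from /dev/urandom with xxd -p, runs the resulting hex string through an inline python snippet that wraps it in the DH-HMAC-CHAP secret encoding (DHHC-1:<hash id>:<base64 payload>:), writes it to a mktemp file and chmod 0600's it. A compressed, stand-alone sketch of that flow follows; gen_key is an illustrative name, it takes the numeric hash id directly instead of the digest name, and the CRC-32 suffix inside the base64 payload is an assumption based on the usual DHHC-1 encoding, not a copy of nvmf/common.sh.

# Sketch: produce a DHHC-1 secret file the way the trace suggests.
# hash id: 0=null, 1=sha256, 2=sha384, 3=sha512; hexlen matches the trace (32/48/64).
gen_key() {
    local digest=$1 hexlen=$2 key file
    key=$(xxd -p -c0 -l $((hexlen / 2)) /dev/urandom)   # random hex string, e.g. 90c0a4be...
    file=$(mktemp -t "spdk.key-sketch.XXX")
    # Assumption: payload is the ASCII hex string plus a little-endian CRC-32,
    # per the common DHHC-1 secret encoding; SPDK's format_key may differ in detail.
    python3 -c 'import base64, sys, zlib
key = sys.argv[2].encode()
payload = key + zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{int(sys.argv[1]):02x}:{base64.b64encode(payload).decode()}:", end="")' \
        "$digest" "$key" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

# Example matching the trace above: ckeys[0]=$(gen_key 3 64)   # sha512-sized key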
00:20:34.610 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:34.610 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.556 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:35.556 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:35.556 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:20:35.556 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.556 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.556 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.556 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:35.556 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.AxW 00:20:35.556 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.556 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.556 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.556 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.AxW 00:20:35.556 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.AxW 00:20:35.556 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.1lW ]] 00:20:35.556 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1lW 00:20:35.556 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.556 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.556 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.556 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1lW 00:20:35.556 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1lW 00:20:35.814 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:35.814 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.sBj 00:20:35.814 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.814 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.814 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.814 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.sBj 00:20:35.814 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.sBj 00:20:36.380 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.CU3 ]] 00:20:36.380 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CU3 00:20:36.380 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.380 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.380 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.380 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CU3 00:20:36.380 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CU3 00:20:36.639 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:36.639 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ZBq 00:20:36.639 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.639 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.639 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.639 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ZBq 00:20:36.639 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ZBq 00:20:36.897 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.xu6 ]] 00:20:36.897 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.xu6 00:20:36.897 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.897 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.897 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.897 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.xu6 00:20:36.897 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.xu6 00:20:37.156 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:37.156 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.fNv 00:20:37.156 18:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.156 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.156 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.156 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.fNv 00:20:37.156 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.fNv 00:20:37.413 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:20:37.413 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:37.413 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:37.413 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.413 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:37.413 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:37.672 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:20:37.672 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.672 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:37.672 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:37.672 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:37.672 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.672 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.672 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.672 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.672 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.672 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.672 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:20:37.930 00:20:37.930 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.930 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.930 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.496 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.496 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.496 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.496 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.496 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.496 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.496 { 00:20:38.496 "auth": { 00:20:38.496 "dhgroup": "null", 00:20:38.496 "digest": "sha256", 00:20:38.496 "state": "completed" 00:20:38.496 }, 00:20:38.496 "cntlid": 1, 00:20:38.496 "listen_address": { 00:20:38.496 "adrfam": "IPv4", 00:20:38.496 "traddr": "10.0.0.2", 00:20:38.496 "trsvcid": "4420", 00:20:38.496 "trtype": "TCP" 00:20:38.496 }, 00:20:38.496 "peer_address": { 00:20:38.496 "adrfam": "IPv4", 00:20:38.496 "traddr": "10.0.0.1", 00:20:38.496 "trsvcid": "43916", 00:20:38.496 "trtype": "TCP" 00:20:38.496 }, 00:20:38.496 "qid": 0, 00:20:38.496 "state": "enabled", 00:20:38.496 "thread": "nvmf_tgt_poll_group_000" 00:20:38.496 } 00:20:38.496 ]' 00:20:38.496 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.496 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:38.496 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.496 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:38.496 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:38.496 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.496 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.496 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.756 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:00:OTA0ZWZlYWE4NmVkZTI2N2EzZTU3NGNkOWMxZWJjNDA4ZWFjNmFlMTAwZDRhMzYy01x3lw==: --dhchap-ctrl-secret DHHC-1:03:OTBjMGE0YmU2NDdjZjYzMDU0NGU0MzE4Y2FjMTQwMWE2NTgyY2RlMGRlM2EwMWExMzQ0N2Y2N2M4N2YwZGNmNWRVl8M=: 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.023 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.023 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
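From here the test is inside its digest/dhgroup/keyid loop; the trace around this point shows one complete iteration. Condensed into a runnable sketch (paths, NQNs and key names copied from the trace), the host/target RPC sequence for keyid 1 with sha256 and the "null" dhgroup is:

# Condensed sketch of one connect_authenticate iteration from the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da

# Register the key files on both sides: target (default RPC socket) and host ($host_sock).
$rpc keyring_file_add_key key1 /tmp/spdk.key-sha256.sBj
$rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CU3
$rpc -s $host_sock keyring_file_add_key key1 /tmp/spdk.key-sha256.sBj
$rpc -s $host_sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CU3

# Pin the host to one digest/dhgroup combination, authorize the host NQN, then attach.
$rpc -s $host_sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc -s $host_sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $hostnqn -n $subnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1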
00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.023 { 00:20:44.023 "auth": { 00:20:44.023 "dhgroup": "null", 00:20:44.023 "digest": "sha256", 00:20:44.023 "state": "completed" 00:20:44.023 }, 00:20:44.023 "cntlid": 3, 00:20:44.023 "listen_address": { 00:20:44.023 "adrfam": "IPv4", 00:20:44.023 "traddr": "10.0.0.2", 00:20:44.023 "trsvcid": "4420", 00:20:44.023 "trtype": "TCP" 00:20:44.023 }, 00:20:44.023 "peer_address": { 00:20:44.023 "adrfam": "IPv4", 00:20:44.023 "traddr": "10.0.0.1", 00:20:44.023 "trsvcid": "46896", 00:20:44.023 "trtype": "TCP" 00:20:44.023 }, 00:20:44.023 "qid": 0, 00:20:44.023 "state": "enabled", 00:20:44.023 "thread": "nvmf_tgt_poll_group_000" 00:20:44.023 } 00:20:44.023 ]' 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:44.023 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.023 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:44.023 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.281 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.281 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.281 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.540 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:01:ZGE0NTk0NWEyNjIzOWU1ODE5NDA3MDQ0MDY0NGE4ZWZdneta: --dhchap-ctrl-secret DHHC-1:02:ZDNmNzVmNmE2ZjBkZWJmMmJiOWYwNWNlODUxMDVmMGFlMDc1NmQwOWRlM2M0ZjE5utXpNg==: 00:20:45.107 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.107 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:20:45.107 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.107 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
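After each attach, the test reads the controller back with bdev_nvme_get_controllers and asserts the negotiated auth parameters from nvmf_subsystem_get_qpairs, exactly as in the JSON dump above, then detaches before the next combination. A minimal sketch of that verification/teardown step (variable names illustrative):

# Sketch of the verification/teardown step shown above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock

[[ $($rpc -s $host_sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]     # negotiated hash
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]       # negotiated DH group
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # authentication finished

# Tear the session down before the next digest/dhgroup/key combination.
$rpc -s $host_sock bdev_nvme_detach_controller nvme0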
00:20:45.107 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.107 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:45.107 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:45.108 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:45.366 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:20:45.366 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.366 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:45.366 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:45.366 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:45.366 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.366 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.366 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.366 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.366 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.366 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.366 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.625 00:20:45.883 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.883 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.883 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.142 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.142 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.142 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.142 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:46.142 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.142 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.142 { 00:20:46.142 "auth": { 00:20:46.142 "dhgroup": "null", 00:20:46.142 "digest": "sha256", 00:20:46.142 "state": "completed" 00:20:46.142 }, 00:20:46.142 "cntlid": 5, 00:20:46.142 "listen_address": { 00:20:46.142 "adrfam": "IPv4", 00:20:46.142 "traddr": "10.0.0.2", 00:20:46.142 "trsvcid": "4420", 00:20:46.142 "trtype": "TCP" 00:20:46.142 }, 00:20:46.142 "peer_address": { 00:20:46.142 "adrfam": "IPv4", 00:20:46.142 "traddr": "10.0.0.1", 00:20:46.142 "trsvcid": "46924", 00:20:46.142 "trtype": "TCP" 00:20:46.142 }, 00:20:46.142 "qid": 0, 00:20:46.142 "state": "enabled", 00:20:46.142 "thread": "nvmf_tgt_poll_group_000" 00:20:46.142 } 00:20:46.142 ]' 00:20:46.142 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.142 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:46.142 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.142 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:46.142 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.142 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.142 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.142 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.401 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:02:MGYzNzg3ZjM5Zjc5Njk4M2I4MGE0YWExOGYxYTVkYmUxNDY4NGRjZWY0NmZkNDdlNqeI4Q==: --dhchap-ctrl-secret DHHC-1:01:NDg0ZjRmMjdlODliODE5MTM0MjliNGZiMzgwZDZkZDfpWGLp: 00:20:47.336 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.336 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:20:47.336 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.336 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.336 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.336 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.336 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:47.336 18:27:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:47.594 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:20:47.594 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.594 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:47.594 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:47.594 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:47.594 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.594 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key3 00:20:47.594 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.594 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.594 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.594 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:47.594 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:47.852 00:20:47.852 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.853 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.853 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.111 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.111 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.111 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.111 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.111 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.111 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.111 { 00:20:48.111 "auth": { 00:20:48.111 "dhgroup": "null", 00:20:48.111 "digest": "sha256", 00:20:48.111 "state": "completed" 00:20:48.111 }, 00:20:48.111 "cntlid": 7, 00:20:48.111 "listen_address": { 00:20:48.111 "adrfam": "IPv4", 00:20:48.111 
"traddr": "10.0.0.2", 00:20:48.111 "trsvcid": "4420", 00:20:48.111 "trtype": "TCP" 00:20:48.111 }, 00:20:48.111 "peer_address": { 00:20:48.111 "adrfam": "IPv4", 00:20:48.111 "traddr": "10.0.0.1", 00:20:48.111 "trsvcid": "46946", 00:20:48.111 "trtype": "TCP" 00:20:48.111 }, 00:20:48.111 "qid": 0, 00:20:48.111 "state": "enabled", 00:20:48.111 "thread": "nvmf_tgt_poll_group_000" 00:20:48.111 } 00:20:48.111 ]' 00:20:48.111 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.111 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:48.111 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.111 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:48.111 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.111 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.111 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.111 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.678 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:03:MTFhOTg2OTBjNTdlY2QyZjZjYzcwZDQ1YWJhZDM4NTFiNDQxY2U1OWQzMDVmNWIxMmJlYjlmOTFmNGM3NTRlMB/K07c=: 00:20:49.245 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.245 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:20:49.245 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.245 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.245 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.245 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:49.245 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.245 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:49.245 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:49.504 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:20:49.504 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.504 
18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:49.504 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:49.504 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:49.504 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.504 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.504 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.504 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.504 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.504 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.504 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.070 00:20:50.070 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.070 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.070 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.329 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.329 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.329 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.329 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.329 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.329 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.329 { 00:20:50.329 "auth": { 00:20:50.329 "dhgroup": "ffdhe2048", 00:20:50.329 "digest": "sha256", 00:20:50.329 "state": "completed" 00:20:50.329 }, 00:20:50.329 "cntlid": 9, 00:20:50.329 "listen_address": { 00:20:50.329 "adrfam": "IPv4", 00:20:50.329 "traddr": "10.0.0.2", 00:20:50.329 "trsvcid": "4420", 00:20:50.329 "trtype": "TCP" 00:20:50.329 }, 00:20:50.329 "peer_address": { 00:20:50.329 "adrfam": "IPv4", 00:20:50.329 "traddr": "10.0.0.1", 00:20:50.329 "trsvcid": "46978", 00:20:50.329 "trtype": "TCP" 00:20:50.329 }, 00:20:50.329 "qid": 0, 00:20:50.329 "state": "enabled", 00:20:50.329 "thread": "nvmf_tgt_poll_group_000" 00:20:50.329 } 
00:20:50.329 ]' 00:20:50.329 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.329 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:50.329 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.329 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:50.329 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.329 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.329 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.329 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.895 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:00:OTA0ZWZlYWE4NmVkZTI2N2EzZTU3NGNkOWMxZWJjNDA4ZWFjNmFlMTAwZDRhMzYy01x3lw==: --dhchap-ctrl-secret DHHC-1:03:OTBjMGE0YmU2NDdjZjYzMDU0NGU0MzE4Y2FjMTQwMWE2NTgyY2RlMGRlM2EwMWExMzQ0N2Y2N2M4N2YwZGNmNWRVl8M=: 00:20:51.467 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.467 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:20:51.467 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.467 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.467 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.467 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:51.467 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:51.467 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:51.726 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:20:51.726 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.726 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:51.726 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:51.726 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:51.726 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.726 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.726 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.726 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.726 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.726 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.726 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.033 00:20:52.033 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:52.033 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.033 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.308 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.308 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.308 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.308 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.308 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.308 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.308 { 00:20:52.308 "auth": { 00:20:52.308 "dhgroup": "ffdhe2048", 00:20:52.308 "digest": "sha256", 00:20:52.308 "state": "completed" 00:20:52.308 }, 00:20:52.308 "cntlid": 11, 00:20:52.308 "listen_address": { 00:20:52.308 "adrfam": "IPv4", 00:20:52.308 "traddr": "10.0.0.2", 00:20:52.308 "trsvcid": "4420", 00:20:52.308 "trtype": "TCP" 00:20:52.308 }, 00:20:52.308 "peer_address": { 00:20:52.308 "adrfam": "IPv4", 00:20:52.308 "traddr": "10.0.0.1", 00:20:52.308 "trsvcid": "45268", 00:20:52.308 "trtype": "TCP" 00:20:52.308 }, 00:20:52.308 "qid": 0, 00:20:52.308 "state": "enabled", 00:20:52.308 "thread": "nvmf_tgt_poll_group_000" 00:20:52.308 } 00:20:52.308 ]' 00:20:52.308 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.308 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:52.308 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.308 18:28:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:52.308 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.584 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.584 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.584 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.584 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:01:ZGE0NTk0NWEyNjIzOWU1ODE5NDA3MDQ0MDY0NGE4ZWZdneta: --dhchap-ctrl-secret DHHC-1:02:ZDNmNzVmNmE2ZjBkZWJmMmJiOWYwNWNlODUxMDVmMGFlMDc1NmQwOWRlM2M0ZjE5utXpNg==: 00:20:53.520 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.520 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:20:53.520 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.520 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.520 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.520 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.520 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:53.520 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:53.778 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:20:53.778 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.778 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:53.778 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:53.778 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:53.778 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.778 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.779 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
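By this point the dhgroup loop has moved from "null" to ffdhe2048. Besides the SPDK host path (bdev_nvme_attach_controller), every iteration also authenticates with the kernel initiator through nvme-cli, passing the same secrets inline, then disconnects and removes the host entry. A sketch of that leg, assuming the /tmp key files hold the full DHHC-1 strings written during generation (which is what the trace's key files contain):

# Sketch of the kernel-initiator leg of an iteration (flags as used in the trace).
hostid=0b8484e2-e129-4a11-8748-0b3c728771da
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid
key2_secret=$(< /tmp/spdk.key-sha384.ZBq)    # DHHC-1:02:... host secret
ckey2_secret=$(< /tmp/spdk.key-sha256.xu6)   # DHHC-1:01:... controller secret

nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn --hostid $hostid \
    --dhchap-secret "$key2_secret" --dhchap-ctrl-secret "$ckey2_secret"
nvme disconnect -n $subnqn
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host $subnqn $hostnqn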
00:20:53.779 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.779 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.779 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.779 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.037 00:20:54.037 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.037 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.037 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.295 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.295 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.295 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.295 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.295 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.295 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.295 { 00:20:54.295 "auth": { 00:20:54.295 "dhgroup": "ffdhe2048", 00:20:54.295 "digest": "sha256", 00:20:54.295 "state": "completed" 00:20:54.295 }, 00:20:54.295 "cntlid": 13, 00:20:54.295 "listen_address": { 00:20:54.295 "adrfam": "IPv4", 00:20:54.295 "traddr": "10.0.0.2", 00:20:54.295 "trsvcid": "4420", 00:20:54.295 "trtype": "TCP" 00:20:54.295 }, 00:20:54.295 "peer_address": { 00:20:54.295 "adrfam": "IPv4", 00:20:54.295 "traddr": "10.0.0.1", 00:20:54.295 "trsvcid": "45306", 00:20:54.295 "trtype": "TCP" 00:20:54.295 }, 00:20:54.295 "qid": 0, 00:20:54.295 "state": "enabled", 00:20:54.295 "thread": "nvmf_tgt_poll_group_000" 00:20:54.295 } 00:20:54.295 ]' 00:20:54.295 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.295 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:54.295 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.554 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:54.554 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.554 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.554 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.554 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.864 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:02:MGYzNzg3ZjM5Zjc5Njk4M2I4MGE0YWExOGYxYTVkYmUxNDY4NGRjZWY0NmZkNDdlNqeI4Q==: --dhchap-ctrl-secret DHHC-1:01:NDg0ZjRmMjdlODliODE5MTM0MjliNGZiMzgwZDZkZDfpWGLp: 00:20:55.430 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.430 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:20:55.430 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.430 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.430 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.430 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:55.430 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:55.430 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:55.996 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:20:55.996 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.996 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:55.996 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:55.996 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:55.996 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.996 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key3 00:20:55.996 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.996 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.996 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.996 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:55.996 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:56.254 00:20:56.254 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.254 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.254 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.512 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.512 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.512 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.512 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.512 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.512 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.512 { 00:20:56.512 "auth": { 00:20:56.512 "dhgroup": "ffdhe2048", 00:20:56.512 "digest": "sha256", 00:20:56.512 "state": "completed" 00:20:56.512 }, 00:20:56.512 "cntlid": 15, 00:20:56.512 "listen_address": { 00:20:56.512 "adrfam": "IPv4", 00:20:56.512 "traddr": "10.0.0.2", 00:20:56.512 "trsvcid": "4420", 00:20:56.512 "trtype": "TCP" 00:20:56.512 }, 00:20:56.512 "peer_address": { 00:20:56.512 "adrfam": "IPv4", 00:20:56.512 "traddr": "10.0.0.1", 00:20:56.512 "trsvcid": "45324", 00:20:56.512 "trtype": "TCP" 00:20:56.512 }, 00:20:56.512 "qid": 0, 00:20:56.512 "state": "enabled", 00:20:56.512 "thread": "nvmf_tgt_poll_group_000" 00:20:56.512 } 00:20:56.512 ]' 00:20:56.512 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:56.512 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:56.512 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.770 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:56.770 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.770 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.770 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.770 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.028 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:03:MTFhOTg2OTBjNTdlY2QyZjZjYzcwZDQ1YWJhZDM4NTFiNDQxY2U1OWQzMDVmNWIxMmJlYjlmOTFmNGM3NTRlMB/K07c=: 00:20:57.594 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.594 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:20:57.594 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.594 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.594 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.594 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.594 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:57.594 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:57.594 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:57.852 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:20:57.852 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:57.852 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:57.853 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:57.853 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:57.853 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.853 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.853 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.853 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.853 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.853 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.853 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.420 00:20:58.420 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.420 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.420 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.680 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.680 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.680 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.680 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.680 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.680 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.680 { 00:20:58.680 "auth": { 00:20:58.680 "dhgroup": "ffdhe3072", 00:20:58.680 "digest": "sha256", 00:20:58.680 "state": "completed" 00:20:58.680 }, 00:20:58.680 "cntlid": 17, 00:20:58.680 "listen_address": { 00:20:58.680 "adrfam": "IPv4", 00:20:58.680 "traddr": "10.0.0.2", 00:20:58.680 "trsvcid": "4420", 00:20:58.680 "trtype": "TCP" 00:20:58.680 }, 00:20:58.680 "peer_address": { 00:20:58.680 "adrfam": "IPv4", 00:20:58.680 "traddr": "10.0.0.1", 00:20:58.680 "trsvcid": "45356", 00:20:58.680 "trtype": "TCP" 00:20:58.680 }, 00:20:58.680 "qid": 0, 00:20:58.680 "state": "enabled", 00:20:58.680 "thread": "nvmf_tgt_poll_group_000" 00:20:58.680 } 00:20:58.680 ]' 00:20:58.680 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.680 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:58.680 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.680 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:58.680 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.680 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.680 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.680 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.252 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:00:OTA0ZWZlYWE4NmVkZTI2N2EzZTU3NGNkOWMxZWJjNDA4ZWFjNmFlMTAwZDRhMzYy01x3lw==: --dhchap-ctrl-secret DHHC-1:03:OTBjMGE0YmU2NDdjZjYzMDU0NGU0MzE4Y2FjMTQwMWE2NTgyY2RlMGRlM2EwMWExMzQ0N2Y2N2M4N2YwZGNmNWRVl8M=: 00:20:59.818 18:28:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.818 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:20:59.818 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.818 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.818 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.818 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.818 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:59.818 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:00.077 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:21:00.077 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:00.077 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:00.077 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:00.077 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:00.077 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.077 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.077 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.077 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.077 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.077 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.077 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.336 00:21:00.594 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.594 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
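The qpairs JSON blocks that recur through this trace are what each round asserts on. A minimal stand-alone form of that check for the current pass (sha256 / ffdhe3072), assuming the same subsystem NQN and using the jq filters shown in the trace:

  # capture the subsystem's active qpairs and inspect the negotiated auth block
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  jq -r '.[0].auth.digest'  <<< "$qpairs"   # expected: sha256
  jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expected: ffdhe3072
  jq -r '.[0].auth.state'   <<< "$qpairs"   # expected: completed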
00:21:00.594 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.853 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.853 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.853 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.853 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.853 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.853 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.853 { 00:21:00.853 "auth": { 00:21:00.853 "dhgroup": "ffdhe3072", 00:21:00.853 "digest": "sha256", 00:21:00.853 "state": "completed" 00:21:00.853 }, 00:21:00.853 "cntlid": 19, 00:21:00.853 "listen_address": { 00:21:00.853 "adrfam": "IPv4", 00:21:00.853 "traddr": "10.0.0.2", 00:21:00.853 "trsvcid": "4420", 00:21:00.853 "trtype": "TCP" 00:21:00.853 }, 00:21:00.853 "peer_address": { 00:21:00.853 "adrfam": "IPv4", 00:21:00.853 "traddr": "10.0.0.1", 00:21:00.853 "trsvcid": "45390", 00:21:00.853 "trtype": "TCP" 00:21:00.853 }, 00:21:00.853 "qid": 0, 00:21:00.853 "state": "enabled", 00:21:00.853 "thread": "nvmf_tgt_poll_group_000" 00:21:00.853 } 00:21:00.853 ]' 00:21:00.853 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.853 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:00.853 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.853 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:00.853 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.853 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.853 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.853 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.112 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:01:ZGE0NTk0NWEyNjIzOWU1ODE5NDA3MDQ0MDY0NGE4ZWZdneta: --dhchap-ctrl-secret DHHC-1:02:ZDNmNzVmNmE2ZjBkZWJmMmJiOWYwNWNlODUxMDVmMGFlMDc1NmQwOWRlM2M0ZjE5utXpNg==: 00:21:02.051 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.051 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:02.051 18:28:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.051 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.051 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.051 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.051 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:02.051 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:02.310 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:21:02.310 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.310 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:02.310 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:02.310 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:02.310 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.310 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.310 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.310 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.310 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.310 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.310 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.568 00:21:02.568 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.568 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.568 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.826 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.826 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:02.826 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.826 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.085 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.085 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.085 { 00:21:03.085 "auth": { 00:21:03.085 "dhgroup": "ffdhe3072", 00:21:03.085 "digest": "sha256", 00:21:03.085 "state": "completed" 00:21:03.085 }, 00:21:03.085 "cntlid": 21, 00:21:03.085 "listen_address": { 00:21:03.085 "adrfam": "IPv4", 00:21:03.085 "traddr": "10.0.0.2", 00:21:03.085 "trsvcid": "4420", 00:21:03.085 "trtype": "TCP" 00:21:03.085 }, 00:21:03.085 "peer_address": { 00:21:03.085 "adrfam": "IPv4", 00:21:03.085 "traddr": "10.0.0.1", 00:21:03.085 "trsvcid": "46988", 00:21:03.085 "trtype": "TCP" 00:21:03.085 }, 00:21:03.085 "qid": 0, 00:21:03.085 "state": "enabled", 00:21:03.085 "thread": "nvmf_tgt_poll_group_000" 00:21:03.085 } 00:21:03.085 ]' 00:21:03.085 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.085 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:03.085 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.085 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:03.085 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.085 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.085 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.085 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.343 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:02:MGYzNzg3ZjM5Zjc5Njk4M2I4MGE0YWExOGYxYTVkYmUxNDY4NGRjZWY0NmZkNDdlNqeI4Q==: --dhchap-ctrl-secret DHHC-1:01:NDg0ZjRmMjdlODliODE5MTM0MjliNGZiMzgwZDZkZDfpWGLp: 00:21:03.910 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.169 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:04.169 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.169 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.169 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.169 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:21:04.169 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:04.169 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:04.428 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:21:04.428 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.428 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:04.428 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:04.428 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:04.428 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.428 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key3 00:21:04.428 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.428 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.428 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.428 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:04.428 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:04.687 00:21:04.687 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.687 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.687 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.945 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.945 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.945 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.945 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.945 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.945 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.945 { 00:21:04.945 "auth": { 
00:21:04.945 "dhgroup": "ffdhe3072", 00:21:04.945 "digest": "sha256", 00:21:04.945 "state": "completed" 00:21:04.945 }, 00:21:04.945 "cntlid": 23, 00:21:04.945 "listen_address": { 00:21:04.945 "adrfam": "IPv4", 00:21:04.945 "traddr": "10.0.0.2", 00:21:04.945 "trsvcid": "4420", 00:21:04.945 "trtype": "TCP" 00:21:04.945 }, 00:21:04.945 "peer_address": { 00:21:04.945 "adrfam": "IPv4", 00:21:04.945 "traddr": "10.0.0.1", 00:21:04.945 "trsvcid": "47012", 00:21:04.945 "trtype": "TCP" 00:21:04.945 }, 00:21:04.945 "qid": 0, 00:21:04.945 "state": "enabled", 00:21:04.945 "thread": "nvmf_tgt_poll_group_000" 00:21:04.945 } 00:21:04.945 ]' 00:21:04.945 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:04.945 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:04.945 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.204 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:05.204 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.204 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.204 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.204 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.462 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:03:MTFhOTg2OTBjNTdlY2QyZjZjYzcwZDQ1YWJhZDM4NTFiNDQxY2U1OWQzMDVmNWIxMmJlYjlmOTFmNGM3NTRlMB/K07c=: 00:21:06.029 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.029 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:06.029 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.029 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.029 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.029 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.029 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.029 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:06.029 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:06.288 18:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:21:06.288 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.288 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:06.288 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:06.288 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:06.288 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.288 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.288 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.288 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.546 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.546 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.546 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.860 00:21:06.860 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.860 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.860 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:07.118 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.118 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.118 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.118 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.118 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.118 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:07.118 { 00:21:07.118 "auth": { 00:21:07.118 "dhgroup": "ffdhe4096", 00:21:07.119 "digest": "sha256", 00:21:07.119 "state": "completed" 00:21:07.119 }, 00:21:07.119 "cntlid": 25, 00:21:07.119 "listen_address": { 00:21:07.119 "adrfam": "IPv4", 00:21:07.119 "traddr": "10.0.0.2", 00:21:07.119 "trsvcid": "4420", 00:21:07.119 "trtype": "TCP" 00:21:07.119 }, 00:21:07.119 "peer_address": { 00:21:07.119 
"adrfam": "IPv4", 00:21:07.119 "traddr": "10.0.0.1", 00:21:07.119 "trsvcid": "47036", 00:21:07.119 "trtype": "TCP" 00:21:07.119 }, 00:21:07.119 "qid": 0, 00:21:07.119 "state": "enabled", 00:21:07.119 "thread": "nvmf_tgt_poll_group_000" 00:21:07.119 } 00:21:07.119 ]' 00:21:07.119 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:07.119 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:07.119 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:07.377 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:07.377 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.377 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.377 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.377 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.636 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:00:OTA0ZWZlYWE4NmVkZTI2N2EzZTU3NGNkOWMxZWJjNDA4ZWFjNmFlMTAwZDRhMzYy01x3lw==: --dhchap-ctrl-secret DHHC-1:03:OTBjMGE0YmU2NDdjZjYzMDU0NGU0MzE4Y2FjMTQwMWE2NTgyY2RlMGRlM2EwMWExMzQ0N2Y2N2M4N2YwZGNmNWRVl8M=: 00:21:08.203 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.203 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:08.203 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.203 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.203 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.203 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.203 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:08.203 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:08.768 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:21:08.768 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.768 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:08.768 18:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:08.768 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:08.768 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.768 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.768 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.768 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.768 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.768 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.768 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.026 00:21:09.026 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.026 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.026 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.284 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.284 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.284 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.284 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.284 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.284 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.284 { 00:21:09.284 "auth": { 00:21:09.284 "dhgroup": "ffdhe4096", 00:21:09.284 "digest": "sha256", 00:21:09.284 "state": "completed" 00:21:09.284 }, 00:21:09.284 "cntlid": 27, 00:21:09.284 "listen_address": { 00:21:09.284 "adrfam": "IPv4", 00:21:09.284 "traddr": "10.0.0.2", 00:21:09.284 "trsvcid": "4420", 00:21:09.284 "trtype": "TCP" 00:21:09.284 }, 00:21:09.284 "peer_address": { 00:21:09.284 "adrfam": "IPv4", 00:21:09.284 "traddr": "10.0.0.1", 00:21:09.284 "trsvcid": "47066", 00:21:09.284 "trtype": "TCP" 00:21:09.284 }, 00:21:09.284 "qid": 0, 00:21:09.284 "state": "enabled", 00:21:09.284 "thread": "nvmf_tgt_poll_group_000" 00:21:09.284 } 00:21:09.284 ]' 00:21:09.284 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:21:09.284 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:09.284 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.284 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:09.284 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.543 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.543 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.543 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.800 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:01:ZGE0NTk0NWEyNjIzOWU1ODE5NDA3MDQ0MDY0NGE4ZWZdneta: --dhchap-ctrl-secret DHHC-1:02:ZDNmNzVmNmE2ZjBkZWJmMmJiOWYwNWNlODUxMDVmMGFlMDc1NmQwOWRlM2M0ZjE5utXpNg==: 00:21:10.367 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.367 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:10.367 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.367 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.367 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.367 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:10.367 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:10.367 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:10.625 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:21:10.625 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:10.625 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:10.625 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:10.625 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:10.625 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.625 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.625 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.625 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.625 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.625 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.625 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.882 00:21:11.140 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.140 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.140 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.398 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.398 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.398 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.398 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.398 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.398 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.398 { 00:21:11.398 "auth": { 00:21:11.398 "dhgroup": "ffdhe4096", 00:21:11.398 "digest": "sha256", 00:21:11.398 "state": "completed" 00:21:11.398 }, 00:21:11.398 "cntlid": 29, 00:21:11.398 "listen_address": { 00:21:11.398 "adrfam": "IPv4", 00:21:11.398 "traddr": "10.0.0.2", 00:21:11.398 "trsvcid": "4420", 00:21:11.398 "trtype": "TCP" 00:21:11.398 }, 00:21:11.398 "peer_address": { 00:21:11.398 "adrfam": "IPv4", 00:21:11.398 "traddr": "10.0.0.1", 00:21:11.398 "trsvcid": "47102", 00:21:11.398 "trtype": "TCP" 00:21:11.398 }, 00:21:11.398 "qid": 0, 00:21:11.398 "state": "enabled", 00:21:11.398 "thread": "nvmf_tgt_poll_group_000" 00:21:11.398 } 00:21:11.398 ]' 00:21:11.398 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.398 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:11.398 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.398 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:11.398 18:28:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.398 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.398 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.398 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.656 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:02:MGYzNzg3ZjM5Zjc5Njk4M2I4MGE0YWExOGYxYTVkYmUxNDY4NGRjZWY0NmZkNDdlNqeI4Q==: --dhchap-ctrl-secret DHHC-1:01:NDg0ZjRmMjdlODliODE5MTM0MjliNGZiMzgwZDZkZDfpWGLp: 00:21:12.610 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.610 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:12.610 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.610 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.610 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.610 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:12.610 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:12.610 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:12.883 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:21:12.883 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:12.883 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:12.883 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:12.883 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:12.883 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.883 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key3 00:21:12.883 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.883 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.883 18:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.883 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:12.883 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:13.141 00:21:13.141 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.141 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.141 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.397 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.397 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.397 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.397 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.397 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.397 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.397 { 00:21:13.397 "auth": { 00:21:13.397 "dhgroup": "ffdhe4096", 00:21:13.397 "digest": "sha256", 00:21:13.397 "state": "completed" 00:21:13.397 }, 00:21:13.397 "cntlid": 31, 00:21:13.397 "listen_address": { 00:21:13.397 "adrfam": "IPv4", 00:21:13.397 "traddr": "10.0.0.2", 00:21:13.397 "trsvcid": "4420", 00:21:13.397 "trtype": "TCP" 00:21:13.397 }, 00:21:13.397 "peer_address": { 00:21:13.398 "adrfam": "IPv4", 00:21:13.398 "traddr": "10.0.0.1", 00:21:13.398 "trsvcid": "39894", 00:21:13.398 "trtype": "TCP" 00:21:13.398 }, 00:21:13.398 "qid": 0, 00:21:13.398 "state": "enabled", 00:21:13.398 "thread": "nvmf_tgt_poll_group_000" 00:21:13.398 } 00:21:13.398 ]' 00:21:13.398 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.654 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:13.654 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:13.654 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:13.654 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:13.654 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.654 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.654 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.912 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:03:MTFhOTg2OTBjNTdlY2QyZjZjYzcwZDQ1YWJhZDM4NTFiNDQxY2U1OWQzMDVmNWIxMmJlYjlmOTFmNGM3NTRlMB/K07c=: 00:21:14.845 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.845 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:14.845 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.845 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.845 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.845 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:14.845 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:14.845 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:14.845 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:14.845 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:21:14.845 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:14.845 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:14.845 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:14.845 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:14.845 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.845 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.845 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.845 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.845 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.845 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:14.845 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.455 00:21:15.455 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.455 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.455 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.713 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.713 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.713 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.713 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.713 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.713 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.713 { 00:21:15.713 "auth": { 00:21:15.713 "dhgroup": "ffdhe6144", 00:21:15.713 "digest": "sha256", 00:21:15.713 "state": "completed" 00:21:15.713 }, 00:21:15.713 "cntlid": 33, 00:21:15.713 "listen_address": { 00:21:15.713 "adrfam": "IPv4", 00:21:15.713 "traddr": "10.0.0.2", 00:21:15.713 "trsvcid": "4420", 00:21:15.713 "trtype": "TCP" 00:21:15.713 }, 00:21:15.713 "peer_address": { 00:21:15.713 "adrfam": "IPv4", 00:21:15.713 "traddr": "10.0.0.1", 00:21:15.713 "trsvcid": "39936", 00:21:15.713 "trtype": "TCP" 00:21:15.713 }, 00:21:15.713 "qid": 0, 00:21:15.713 "state": "enabled", 00:21:15.713 "thread": "nvmf_tgt_poll_group_000" 00:21:15.713 } 00:21:15.713 ]' 00:21:15.713 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.713 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:15.713 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:15.713 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:15.713 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:15.971 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.971 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.971 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.229 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 
0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:00:OTA0ZWZlYWE4NmVkZTI2N2EzZTU3NGNkOWMxZWJjNDA4ZWFjNmFlMTAwZDRhMzYy01x3lw==: --dhchap-ctrl-secret DHHC-1:03:OTBjMGE0YmU2NDdjZjYzMDU0NGU0MzE4Y2FjMTQwMWE2NTgyY2RlMGRlM2EwMWExMzQ0N2Y2N2M4N2YwZGNmNWRVl8M=: 00:21:16.794 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.794 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:16.794 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.794 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.794 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.794 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:16.794 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:16.794 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:17.053 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:21:17.053 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.053 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:17.053 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:17.053 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:17.053 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.053 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.053 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.053 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.053 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.053 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.053 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.620 00:21:17.620 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:17.620 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.620 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:17.877 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.877 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.877 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.877 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.877 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.878 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:17.878 { 00:21:17.878 "auth": { 00:21:17.878 "dhgroup": "ffdhe6144", 00:21:17.878 "digest": "sha256", 00:21:17.878 "state": "completed" 00:21:17.878 }, 00:21:17.878 "cntlid": 35, 00:21:17.878 "listen_address": { 00:21:17.878 "adrfam": "IPv4", 00:21:17.878 "traddr": "10.0.0.2", 00:21:17.878 "trsvcid": "4420", 00:21:17.878 "trtype": "TCP" 00:21:17.878 }, 00:21:17.878 "peer_address": { 00:21:17.878 "adrfam": "IPv4", 00:21:17.878 "traddr": "10.0.0.1", 00:21:17.878 "trsvcid": "39964", 00:21:17.878 "trtype": "TCP" 00:21:17.878 }, 00:21:17.878 "qid": 0, 00:21:17.878 "state": "enabled", 00:21:17.878 "thread": "nvmf_tgt_poll_group_000" 00:21:17.878 } 00:21:17.878 ]' 00:21:17.878 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:17.878 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:17.878 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:17.878 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:17.878 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.135 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.135 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.135 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.393 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:01:ZGE0NTk0NWEyNjIzOWU1ODE5NDA3MDQ0MDY0NGE4ZWZdneta: --dhchap-ctrl-secret DHHC-1:02:ZDNmNzVmNmE2ZjBkZWJmMmJiOWYwNWNlODUxMDVmMGFlMDc1NmQwOWRlM2M0ZjE5utXpNg==: 00:21:18.960 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.960 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.960 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:18.960 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.960 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.218 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.218 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.218 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:19.219 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:19.219 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:21:19.219 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:19.219 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:19.219 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:19.219 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:19.219 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.219 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.219 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.219 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.477 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.477 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.477 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.736 00:21:19.736 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:19.736 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:19.736 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.995 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.995 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.995 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.995 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.995 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.995 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:19.995 { 00:21:19.995 "auth": { 00:21:19.995 "dhgroup": "ffdhe6144", 00:21:19.995 "digest": "sha256", 00:21:19.995 "state": "completed" 00:21:19.995 }, 00:21:19.995 "cntlid": 37, 00:21:19.995 "listen_address": { 00:21:19.995 "adrfam": "IPv4", 00:21:19.995 "traddr": "10.0.0.2", 00:21:19.995 "trsvcid": "4420", 00:21:19.995 "trtype": "TCP" 00:21:19.995 }, 00:21:19.995 "peer_address": { 00:21:19.995 "adrfam": "IPv4", 00:21:19.995 "traddr": "10.0.0.1", 00:21:19.995 "trsvcid": "39994", 00:21:19.995 "trtype": "TCP" 00:21:19.995 }, 00:21:19.995 "qid": 0, 00:21:19.995 "state": "enabled", 00:21:19.995 "thread": "nvmf_tgt_poll_group_000" 00:21:19.995 } 00:21:19.995 ]' 00:21:19.995 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:20.253 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:20.253 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.253 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:20.253 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:20.253 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.253 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.253 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.511 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:02:MGYzNzg3ZjM5Zjc5Njk4M2I4MGE0YWExOGYxYTVkYmUxNDY4NGRjZWY0NmZkNDdlNqeI4Q==: --dhchap-ctrl-secret DHHC-1:01:NDg0ZjRmMjdlODliODE5MTM0MjliNGZiMzgwZDZkZDfpWGLp: 00:21:21.077 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.077 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:21.077 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:21:21.077 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.077 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.077 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:21.077 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:21.077 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:21.336 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:21:21.336 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:21.336 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:21.336 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:21.336 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:21.336 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.336 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key3 00:21:21.336 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.336 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.336 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.336 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:21.336 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:21.902 00:21:21.902 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:21.902 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.902 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.160 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.160 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.160 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.160 18:28:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.160 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.160 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.160 { 00:21:22.160 "auth": { 00:21:22.160 "dhgroup": "ffdhe6144", 00:21:22.160 "digest": "sha256", 00:21:22.160 "state": "completed" 00:21:22.160 }, 00:21:22.160 "cntlid": 39, 00:21:22.160 "listen_address": { 00:21:22.160 "adrfam": "IPv4", 00:21:22.160 "traddr": "10.0.0.2", 00:21:22.160 "trsvcid": "4420", 00:21:22.160 "trtype": "TCP" 00:21:22.160 }, 00:21:22.160 "peer_address": { 00:21:22.160 "adrfam": "IPv4", 00:21:22.160 "traddr": "10.0.0.1", 00:21:22.160 "trsvcid": "49250", 00:21:22.160 "trtype": "TCP" 00:21:22.160 }, 00:21:22.160 "qid": 0, 00:21:22.160 "state": "enabled", 00:21:22.160 "thread": "nvmf_tgt_poll_group_000" 00:21:22.160 } 00:21:22.160 ]' 00:21:22.160 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:22.160 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:22.160 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.160 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:22.160 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.418 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.418 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.418 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.677 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:03:MTFhOTg2OTBjNTdlY2QyZjZjYzcwZDQ1YWJhZDM4NTFiNDQxY2U1OWQzMDVmNWIxMmJlYjlmOTFmNGM3NTRlMB/K07c=: 00:21:23.244 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.244 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:23.244 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.244 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.244 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.244 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:23.244 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.244 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:23.244 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:23.502 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:21:23.502 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:23.502 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:23.502 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:23.502 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:23.502 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.502 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.502 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.502 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.502 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.502 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.502 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.440 00:21:24.440 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.440 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.440 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.440 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.440 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.440 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.440 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.440 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.440 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.440 { 00:21:24.440 "auth": { 00:21:24.440 "dhgroup": 
"ffdhe8192", 00:21:24.440 "digest": "sha256", 00:21:24.440 "state": "completed" 00:21:24.440 }, 00:21:24.440 "cntlid": 41, 00:21:24.440 "listen_address": { 00:21:24.440 "adrfam": "IPv4", 00:21:24.440 "traddr": "10.0.0.2", 00:21:24.440 "trsvcid": "4420", 00:21:24.440 "trtype": "TCP" 00:21:24.440 }, 00:21:24.440 "peer_address": { 00:21:24.440 "adrfam": "IPv4", 00:21:24.440 "traddr": "10.0.0.1", 00:21:24.440 "trsvcid": "49264", 00:21:24.440 "trtype": "TCP" 00:21:24.440 }, 00:21:24.440 "qid": 0, 00:21:24.440 "state": "enabled", 00:21:24.440 "thread": "nvmf_tgt_poll_group_000" 00:21:24.440 } 00:21:24.440 ]' 00:21:24.699 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:24.699 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:24.699 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.699 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:24.699 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:24.699 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.699 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.699 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.958 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:00:OTA0ZWZlYWE4NmVkZTI2N2EzZTU3NGNkOWMxZWJjNDA4ZWFjNmFlMTAwZDRhMzYy01x3lw==: --dhchap-ctrl-secret DHHC-1:03:OTBjMGE0YmU2NDdjZjYzMDU0NGU0MzE4Y2FjMTQwMWE2NTgyY2RlMGRlM2EwMWExMzQ0N2Y2N2M4N2YwZGNmNWRVl8M=: 00:21:25.529 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.529 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:25.529 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.529 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.529 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.529 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:25.529 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:25.529 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:25.826 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:21:25.826 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:25.826 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:25.826 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:25.827 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:25.827 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.827 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.827 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.827 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.085 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.085 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.085 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.652 00:21:26.652 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.652 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.652 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.911 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.911 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.911 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.911 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.911 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.911 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.911 { 00:21:26.911 "auth": { 00:21:26.911 "dhgroup": "ffdhe8192", 00:21:26.911 "digest": "sha256", 00:21:26.911 "state": "completed" 00:21:26.911 }, 00:21:26.911 "cntlid": 43, 00:21:26.911 "listen_address": { 00:21:26.911 "adrfam": "IPv4", 00:21:26.911 "traddr": "10.0.0.2", 00:21:26.911 "trsvcid": "4420", 00:21:26.911 "trtype": "TCP" 00:21:26.911 }, 00:21:26.911 "peer_address": { 00:21:26.911 "adrfam": "IPv4", 00:21:26.911 "traddr": 
"10.0.0.1", 00:21:26.911 "trsvcid": "49286", 00:21:26.911 "trtype": "TCP" 00:21:26.911 }, 00:21:26.911 "qid": 0, 00:21:26.911 "state": "enabled", 00:21:26.911 "thread": "nvmf_tgt_poll_group_000" 00:21:26.911 } 00:21:26.911 ]' 00:21:26.911 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.911 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:26.911 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.169 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:27.169 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.169 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.169 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.169 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.427 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:01:ZGE0NTk0NWEyNjIzOWU1ODE5NDA3MDQ0MDY0NGE4ZWZdneta: --dhchap-ctrl-secret DHHC-1:02:ZDNmNzVmNmE2ZjBkZWJmMmJiOWYwNWNlODUxMDVmMGFlMDc1NmQwOWRlM2M0ZjE5utXpNg==: 00:21:27.992 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.992 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:27.992 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.992 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.282 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.282 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.282 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:28.282 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:28.566 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:21:28.566 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.566 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:28.566 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:28.566 18:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:28.566 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.566 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.566 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.566 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.566 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.566 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.566 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.132 00:21:29.132 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.132 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.132 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.390 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.390 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.390 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.390 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.390 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.390 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.390 { 00:21:29.390 "auth": { 00:21:29.390 "dhgroup": "ffdhe8192", 00:21:29.390 "digest": "sha256", 00:21:29.390 "state": "completed" 00:21:29.390 }, 00:21:29.390 "cntlid": 45, 00:21:29.390 "listen_address": { 00:21:29.390 "adrfam": "IPv4", 00:21:29.390 "traddr": "10.0.0.2", 00:21:29.390 "trsvcid": "4420", 00:21:29.390 "trtype": "TCP" 00:21:29.390 }, 00:21:29.390 "peer_address": { 00:21:29.390 "adrfam": "IPv4", 00:21:29.390 "traddr": "10.0.0.1", 00:21:29.390 "trsvcid": "49318", 00:21:29.390 "trtype": "TCP" 00:21:29.390 }, 00:21:29.390 "qid": 0, 00:21:29.390 "state": "enabled", 00:21:29.390 "thread": "nvmf_tgt_poll_group_000" 00:21:29.390 } 00:21:29.390 ]' 00:21:29.390 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.390 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:21:29.390 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.649 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:29.649 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.649 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.649 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.649 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.908 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:02:MGYzNzg3ZjM5Zjc5Njk4M2I4MGE0YWExOGYxYTVkYmUxNDY4NGRjZWY0NmZkNDdlNqeI4Q==: --dhchap-ctrl-secret DHHC-1:01:NDg0ZjRmMjdlODliODE5MTM0MjliNGZiMzgwZDZkZDfpWGLp: 00:21:30.475 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.475 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:30.475 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.475 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.475 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.475 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.475 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:30.475 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:31.041 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:21:31.041 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.041 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:31.041 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:31.041 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:31.041 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.041 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 
--dhchap-key key3 00:21:31.041 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.041 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.041 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.041 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:31.041 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:31.607 00:21:31.607 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.608 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.608 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.865 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.865 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.865 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.865 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.865 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.865 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.865 { 00:21:31.865 "auth": { 00:21:31.865 "dhgroup": "ffdhe8192", 00:21:31.865 "digest": "sha256", 00:21:31.865 "state": "completed" 00:21:31.865 }, 00:21:31.865 "cntlid": 47, 00:21:31.865 "listen_address": { 00:21:31.865 "adrfam": "IPv4", 00:21:31.865 "traddr": "10.0.0.2", 00:21:31.865 "trsvcid": "4420", 00:21:31.865 "trtype": "TCP" 00:21:31.865 }, 00:21:31.865 "peer_address": { 00:21:31.865 "adrfam": "IPv4", 00:21:31.865 "traddr": "10.0.0.1", 00:21:31.865 "trsvcid": "49338", 00:21:31.865 "trtype": "TCP" 00:21:31.865 }, 00:21:31.865 "qid": 0, 00:21:31.865 "state": "enabled", 00:21:31.865 "thread": "nvmf_tgt_poll_group_000" 00:21:31.865 } 00:21:31.865 ]' 00:21:31.865 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.865 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:31.865 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.865 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:31.865 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:31.865 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:21:31.865 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.865 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.124 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:03:MTFhOTg2OTBjNTdlY2QyZjZjYzcwZDQ1YWJhZDM4NTFiNDQxY2U1OWQzMDVmNWIxMmJlYjlmOTFmNGM3NTRlMB/K07c=: 00:21:33.060 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.060 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:33.060 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.060 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.060 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.060 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:33.060 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:33.060 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.060 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:33.060 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:33.060 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:21:33.060 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.060 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:33.060 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:33.060 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:33.060 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.060 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.060 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.060 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.060 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.060 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.060 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.633 00:21:33.633 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:33.633 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.633 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:33.891 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.891 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.891 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.891 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.891 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.891 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:33.891 { 00:21:33.891 "auth": { 00:21:33.891 "dhgroup": "null", 00:21:33.891 "digest": "sha384", 00:21:33.891 "state": "completed" 00:21:33.891 }, 00:21:33.891 "cntlid": 49, 00:21:33.891 "listen_address": { 00:21:33.891 "adrfam": "IPv4", 00:21:33.891 "traddr": "10.0.0.2", 00:21:33.891 "trsvcid": "4420", 00:21:33.891 "trtype": "TCP" 00:21:33.891 }, 00:21:33.891 "peer_address": { 00:21:33.891 "adrfam": "IPv4", 00:21:33.891 "traddr": "10.0.0.1", 00:21:33.891 "trsvcid": "36294", 00:21:33.891 "trtype": "TCP" 00:21:33.891 }, 00:21:33.891 "qid": 0, 00:21:33.891 "state": "enabled", 00:21:33.891 "thread": "nvmf_tgt_poll_group_000" 00:21:33.891 } 00:21:33.891 ]' 00:21:33.891 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:33.891 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:33.891 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:33.891 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:33.891 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:33.891 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.891 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.891 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.150 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:00:OTA0ZWZlYWE4NmVkZTI2N2EzZTU3NGNkOWMxZWJjNDA4ZWFjNmFlMTAwZDRhMzYy01x3lw==: --dhchap-ctrl-secret DHHC-1:03:OTBjMGE0YmU2NDdjZjYzMDU0NGU0MzE4Y2FjMTQwMWE2NTgyY2RlMGRlM2EwMWExMzQ0N2Y2N2M4N2YwZGNmNWRVl8M=: 00:21:35.085 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.085 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:35.085 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.085 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.085 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.085 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.085 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:35.085 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:35.343 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:21:35.343 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.343 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:35.343 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:35.343 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:35.343 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.343 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.343 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.343 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.343 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.343 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.343 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.601 00:21:35.601 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:35.601 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:35.601 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.859 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.859 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.859 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.859 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.859 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.859 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:35.859 { 00:21:35.859 "auth": { 00:21:35.859 "dhgroup": "null", 00:21:35.859 "digest": "sha384", 00:21:35.859 "state": "completed" 00:21:35.859 }, 00:21:35.859 "cntlid": 51, 00:21:35.859 "listen_address": { 00:21:35.859 "adrfam": "IPv4", 00:21:35.859 "traddr": "10.0.0.2", 00:21:35.859 "trsvcid": "4420", 00:21:35.859 "trtype": "TCP" 00:21:35.859 }, 00:21:35.859 "peer_address": { 00:21:35.859 "adrfam": "IPv4", 00:21:35.859 "traddr": "10.0.0.1", 00:21:35.859 "trsvcid": "36322", 00:21:35.859 "trtype": "TCP" 00:21:35.859 }, 00:21:35.859 "qid": 0, 00:21:35.859 "state": "enabled", 00:21:35.859 "thread": "nvmf_tgt_poll_group_000" 00:21:35.859 } 00:21:35.859 ]' 00:21:35.859 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:36.117 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:36.117 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:36.117 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:36.117 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:36.117 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.117 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.117 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.375 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:01:ZGE0NTk0NWEyNjIzOWU1ODE5NDA3MDQ0MDY0NGE4ZWZdneta: --dhchap-ctrl-secret 
DHHC-1:02:ZDNmNzVmNmE2ZjBkZWJmMmJiOWYwNWNlODUxMDVmMGFlMDc1NmQwOWRlM2M0ZjE5utXpNg==: 00:21:37.310 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.310 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:37.310 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.310 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.310 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.310 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:37.310 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:37.310 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:37.310 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:21:37.310 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:37.310 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:37.310 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:37.310 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:37.310 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.310 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.310 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.310 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.310 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.310 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.310 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.568 00:21:37.827 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:37.827 18:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.827 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.827 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.827 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.827 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.827 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.085 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.085 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:38.085 { 00:21:38.085 "auth": { 00:21:38.085 "dhgroup": "null", 00:21:38.085 "digest": "sha384", 00:21:38.085 "state": "completed" 00:21:38.085 }, 00:21:38.085 "cntlid": 53, 00:21:38.085 "listen_address": { 00:21:38.085 "adrfam": "IPv4", 00:21:38.085 "traddr": "10.0.0.2", 00:21:38.085 "trsvcid": "4420", 00:21:38.085 "trtype": "TCP" 00:21:38.085 }, 00:21:38.085 "peer_address": { 00:21:38.085 "adrfam": "IPv4", 00:21:38.085 "traddr": "10.0.0.1", 00:21:38.085 "trsvcid": "36366", 00:21:38.085 "trtype": "TCP" 00:21:38.085 }, 00:21:38.085 "qid": 0, 00:21:38.085 "state": "enabled", 00:21:38.085 "thread": "nvmf_tgt_poll_group_000" 00:21:38.085 } 00:21:38.085 ]' 00:21:38.085 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.085 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:38.085 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.085 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:38.085 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.085 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.085 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.085 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.343 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:02:MGYzNzg3ZjM5Zjc5Njk4M2I4MGE0YWExOGYxYTVkYmUxNDY4NGRjZWY0NmZkNDdlNqeI4Q==: --dhchap-ctrl-secret DHHC-1:01:NDg0ZjRmMjdlODliODE5MTM0MjliNGZiMzgwZDZkZDfpWGLp: 00:21:39.287 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.287 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:39.287 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.287 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.287 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.287 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:39.287 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:39.287 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:39.287 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:21:39.287 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:39.287 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:39.287 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:39.287 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:39.287 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.287 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key3 00:21:39.287 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.287 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.287 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.287 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.287 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.854 00:21:39.854 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:39.854 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:39.854 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.112 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.112 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:40.112 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.112 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.112 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.112 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:40.112 { 00:21:40.112 "auth": { 00:21:40.112 "dhgroup": "null", 00:21:40.112 "digest": "sha384", 00:21:40.112 "state": "completed" 00:21:40.112 }, 00:21:40.112 "cntlid": 55, 00:21:40.112 "listen_address": { 00:21:40.113 "adrfam": "IPv4", 00:21:40.113 "traddr": "10.0.0.2", 00:21:40.113 "trsvcid": "4420", 00:21:40.113 "trtype": "TCP" 00:21:40.113 }, 00:21:40.113 "peer_address": { 00:21:40.113 "adrfam": "IPv4", 00:21:40.113 "traddr": "10.0.0.1", 00:21:40.113 "trsvcid": "36402", 00:21:40.113 "trtype": "TCP" 00:21:40.113 }, 00:21:40.113 "qid": 0, 00:21:40.113 "state": "enabled", 00:21:40.113 "thread": "nvmf_tgt_poll_group_000" 00:21:40.113 } 00:21:40.113 ]' 00:21:40.113 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:40.113 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:40.113 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:40.113 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:40.113 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:40.371 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.371 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.371 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.628 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:03:MTFhOTg2OTBjNTdlY2QyZjZjYzcwZDQ1YWJhZDM4NTFiNDQxY2U1OWQzMDVmNWIxMmJlYjlmOTFmNGM3NTRlMB/K07c=: 00:21:41.196 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.196 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:41.196 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.196 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.196 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.196 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.196 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:41.196 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:41.196 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:41.455 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:21:41.455 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:41.455 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:41.455 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:41.455 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:41.455 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.455 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.455 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.455 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.715 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.715 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.715 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.974 00:21:41.974 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:41.974 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.974 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:42.233 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.233 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.233 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.233 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.233 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.233 18:28:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:42.233 { 00:21:42.233 "auth": { 00:21:42.233 "dhgroup": "ffdhe2048", 00:21:42.233 "digest": "sha384", 00:21:42.233 "state": "completed" 00:21:42.233 }, 00:21:42.233 "cntlid": 57, 00:21:42.233 "listen_address": { 00:21:42.233 "adrfam": "IPv4", 00:21:42.233 "traddr": "10.0.0.2", 00:21:42.233 "trsvcid": "4420", 00:21:42.233 "trtype": "TCP" 00:21:42.233 }, 00:21:42.233 "peer_address": { 00:21:42.233 "adrfam": "IPv4", 00:21:42.233 "traddr": "10.0.0.1", 00:21:42.233 "trsvcid": "47810", 00:21:42.233 "trtype": "TCP" 00:21:42.233 }, 00:21:42.233 "qid": 0, 00:21:42.233 "state": "enabled", 00:21:42.233 "thread": "nvmf_tgt_poll_group_000" 00:21:42.233 } 00:21:42.233 ]' 00:21:42.233 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:42.233 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:42.233 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:42.233 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:42.233 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:42.233 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.233 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.233 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.801 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:00:OTA0ZWZlYWE4NmVkZTI2N2EzZTU3NGNkOWMxZWJjNDA4ZWFjNmFlMTAwZDRhMzYy01x3lw==: --dhchap-ctrl-secret DHHC-1:03:OTBjMGE0YmU2NDdjZjYzMDU0NGU0MzE4Y2FjMTQwMWE2NTgyY2RlMGRlM2EwMWExMzQ0N2Y2N2M4N2YwZGNmNWRVl8M=: 00:21:43.367 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.367 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:43.367 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.367 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.367 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.367 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:43.367 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:43.367 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:43.626 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:21:43.626 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:43.626 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:43.626 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:43.626 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:43.626 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.626 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.626 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.626 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.626 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.626 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.626 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.884 00:21:43.884 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:43.884 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.884 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:44.142 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.142 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.142 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.142 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.142 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.142 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:44.142 { 00:21:44.142 "auth": { 00:21:44.142 "dhgroup": "ffdhe2048", 00:21:44.142 "digest": "sha384", 00:21:44.142 "state": "completed" 00:21:44.142 }, 00:21:44.142 "cntlid": 59, 00:21:44.142 "listen_address": { 00:21:44.142 "adrfam": "IPv4", 00:21:44.142 "traddr": "10.0.0.2", 00:21:44.142 "trsvcid": 
"4420", 00:21:44.142 "trtype": "TCP" 00:21:44.142 }, 00:21:44.142 "peer_address": { 00:21:44.142 "adrfam": "IPv4", 00:21:44.142 "traddr": "10.0.0.1", 00:21:44.142 "trsvcid": "47838", 00:21:44.142 "trtype": "TCP" 00:21:44.142 }, 00:21:44.142 "qid": 0, 00:21:44.142 "state": "enabled", 00:21:44.142 "thread": "nvmf_tgt_poll_group_000" 00:21:44.142 } 00:21:44.142 ]' 00:21:44.142 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:44.142 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:44.142 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:44.401 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:44.401 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:44.401 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.401 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.401 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.657 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:01:ZGE0NTk0NWEyNjIzOWU1ODE5NDA3MDQ0MDY0NGE4ZWZdneta: --dhchap-ctrl-secret DHHC-1:02:ZDNmNzVmNmE2ZjBkZWJmMmJiOWYwNWNlODUxMDVmMGFlMDc1NmQwOWRlM2M0ZjE5utXpNg==: 00:21:45.223 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.223 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:45.223 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.223 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.223 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.223 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:45.223 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:45.223 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:45.789 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:21:45.789 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:45.789 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:21:45.789 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:45.789 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:45.789 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.790 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.790 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.790 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.790 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.790 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.790 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.048 00:21:46.048 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:46.048 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:46.048 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.612 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.612 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.612 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.612 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.612 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.612 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:46.612 { 00:21:46.612 "auth": { 00:21:46.612 "dhgroup": "ffdhe2048", 00:21:46.612 "digest": "sha384", 00:21:46.612 "state": "completed" 00:21:46.612 }, 00:21:46.612 "cntlid": 61, 00:21:46.612 "listen_address": { 00:21:46.612 "adrfam": "IPv4", 00:21:46.612 "traddr": "10.0.0.2", 00:21:46.612 "trsvcid": "4420", 00:21:46.612 "trtype": "TCP" 00:21:46.612 }, 00:21:46.612 "peer_address": { 00:21:46.612 "adrfam": "IPv4", 00:21:46.612 "traddr": "10.0.0.1", 00:21:46.612 "trsvcid": "47868", 00:21:46.612 "trtype": "TCP" 00:21:46.612 }, 00:21:46.612 "qid": 0, 00:21:46.612 "state": "enabled", 00:21:46.612 "thread": "nvmf_tgt_poll_group_000" 00:21:46.612 } 00:21:46.612 ]' 00:21:46.612 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:46.612 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:46.612 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:46.612 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:46.612 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:46.612 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.612 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.612 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.177 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:02:MGYzNzg3ZjM5Zjc5Njk4M2I4MGE0YWExOGYxYTVkYmUxNDY4NGRjZWY0NmZkNDdlNqeI4Q==: --dhchap-ctrl-secret DHHC-1:01:NDg0ZjRmMjdlODliODE5MTM0MjliNGZiMzgwZDZkZDfpWGLp: 00:21:47.741 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.741 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:47.741 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.741 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.741 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.741 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:47.741 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:47.741 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:47.998 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:21:47.998 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:47.998 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:47.998 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:47.998 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:47.998 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.998 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key3 00:21:47.998 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.998 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.998 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.998 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:47.998 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:48.564 00:21:48.564 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:48.564 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.564 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:48.822 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.822 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.822 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.822 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.822 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.822 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:48.822 { 00:21:48.822 "auth": { 00:21:48.822 "dhgroup": "ffdhe2048", 00:21:48.822 "digest": "sha384", 00:21:48.822 "state": "completed" 00:21:48.822 }, 00:21:48.822 "cntlid": 63, 00:21:48.822 "listen_address": { 00:21:48.822 "adrfam": "IPv4", 00:21:48.822 "traddr": "10.0.0.2", 00:21:48.822 "trsvcid": "4420", 00:21:48.822 "trtype": "TCP" 00:21:48.822 }, 00:21:48.822 "peer_address": { 00:21:48.822 "adrfam": "IPv4", 00:21:48.822 "traddr": "10.0.0.1", 00:21:48.822 "trsvcid": "47884", 00:21:48.822 "trtype": "TCP" 00:21:48.822 }, 00:21:48.822 "qid": 0, 00:21:48.822 "state": "enabled", 00:21:48.822 "thread": "nvmf_tgt_poll_group_000" 00:21:48.822 } 00:21:48.822 ]' 00:21:48.822 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:48.822 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:48.822 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:48.822 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:48.822 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:21:48.822 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.822 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.822 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.080 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:03:MTFhOTg2OTBjNTdlY2QyZjZjYzcwZDQ1YWJhZDM4NTFiNDQxY2U1OWQzMDVmNWIxMmJlYjlmOTFmNGM3NTRlMB/K07c=: 00:21:50.020 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.020 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:50.020 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.020 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.020 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.020 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:50.020 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:50.020 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:50.020 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:50.020 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:21:50.020 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:50.020 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:50.020 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:50.020 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:50.020 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.020 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.020 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.020 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.020 18:29:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.020 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.020 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.585 00:21:50.585 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:50.585 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.585 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:50.585 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.585 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.585 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.585 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.844 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.844 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:50.844 { 00:21:50.844 "auth": { 00:21:50.844 "dhgroup": "ffdhe3072", 00:21:50.844 "digest": "sha384", 00:21:50.844 "state": "completed" 00:21:50.844 }, 00:21:50.844 "cntlid": 65, 00:21:50.844 "listen_address": { 00:21:50.844 "adrfam": "IPv4", 00:21:50.844 "traddr": "10.0.0.2", 00:21:50.844 "trsvcid": "4420", 00:21:50.844 "trtype": "TCP" 00:21:50.844 }, 00:21:50.844 "peer_address": { 00:21:50.844 "adrfam": "IPv4", 00:21:50.844 "traddr": "10.0.0.1", 00:21:50.844 "trsvcid": "47896", 00:21:50.844 "trtype": "TCP" 00:21:50.844 }, 00:21:50.844 "qid": 0, 00:21:50.844 "state": "enabled", 00:21:50.844 "thread": "nvmf_tgt_poll_group_000" 00:21:50.844 } 00:21:50.844 ]' 00:21:50.844 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:50.844 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:50.844 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:50.844 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:50.844 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:50.844 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.844 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.844 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.103 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:00:OTA0ZWZlYWE4NmVkZTI2N2EzZTU3NGNkOWMxZWJjNDA4ZWFjNmFlMTAwZDRhMzYy01x3lw==: --dhchap-ctrl-secret DHHC-1:03:OTBjMGE0YmU2NDdjZjYzMDU0NGU0MzE4Y2FjMTQwMWE2NTgyY2RlMGRlM2EwMWExMzQ0N2Y2N2M4N2YwZGNmNWRVl8M=: 00:21:52.036 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.036 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:52.036 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.036 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.036 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.036 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:52.036 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:52.036 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:52.036 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:21:52.036 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:52.036 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:52.036 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:52.036 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:52.036 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.036 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.036 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.036 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.036 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.036 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:21:52.036 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.601 00:21:52.601 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:52.601 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.601 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:52.859 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.859 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.859 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.859 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.859 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.859 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:52.859 { 00:21:52.859 "auth": { 00:21:52.859 "dhgroup": "ffdhe3072", 00:21:52.859 "digest": "sha384", 00:21:52.859 "state": "completed" 00:21:52.859 }, 00:21:52.859 "cntlid": 67, 00:21:52.859 "listen_address": { 00:21:52.859 "adrfam": "IPv4", 00:21:52.859 "traddr": "10.0.0.2", 00:21:52.859 "trsvcid": "4420", 00:21:52.859 "trtype": "TCP" 00:21:52.859 }, 00:21:52.859 "peer_address": { 00:21:52.859 "adrfam": "IPv4", 00:21:52.859 "traddr": "10.0.0.1", 00:21:52.859 "trsvcid": "33770", 00:21:52.859 "trtype": "TCP" 00:21:52.859 }, 00:21:52.859 "qid": 0, 00:21:52.859 "state": "enabled", 00:21:52.859 "thread": "nvmf_tgt_poll_group_000" 00:21:52.859 } 00:21:52.859 ]' 00:21:52.859 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:52.859 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:52.859 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:52.859 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:52.859 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:52.859 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.859 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.859 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.425 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 
0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:01:ZGE0NTk0NWEyNjIzOWU1ODE5NDA3MDQ0MDY0NGE4ZWZdneta: --dhchap-ctrl-secret DHHC-1:02:ZDNmNzVmNmE2ZjBkZWJmMmJiOWYwNWNlODUxMDVmMGFlMDc1NmQwOWRlM2M0ZjE5utXpNg==: 00:21:53.990 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.990 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:53.990 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.990 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.990 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.990 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:53.990 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:53.990 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:54.248 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:21:54.248 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:54.248 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:54.248 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:54.248 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:54.248 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.248 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.248 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.248 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.248 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.248 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.248 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
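The xtrace above records one pass of connect_authenticate(): the host-side bdev_nvme module is restricted to a single digest/DH-group pair, the host NQN is registered on the subsystem with the matching key pair, and an authenticated controller is attached over the host RPC socket. A condensed sketch of that pass, using only the socket path, NQNs, and key names printed in this log; the target-side calls are assumed to go to the application's default RPC socket, which is what rpc_cmd wraps in this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da

  # Host side: limit DH-HMAC-CHAP negotiation to one digest and one DH group.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

  # Target side: allow the host on the subsystem with the matching key / controller key.
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Host side: attach an authenticated controller to the TCP listener.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2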
00:21:54.506 00:21:54.764 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:54.764 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:54.764 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.022 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.022 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.022 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.022 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.022 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.022 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:55.022 { 00:21:55.022 "auth": { 00:21:55.022 "dhgroup": "ffdhe3072", 00:21:55.022 "digest": "sha384", 00:21:55.022 "state": "completed" 00:21:55.022 }, 00:21:55.022 "cntlid": 69, 00:21:55.022 "listen_address": { 00:21:55.022 "adrfam": "IPv4", 00:21:55.022 "traddr": "10.0.0.2", 00:21:55.022 "trsvcid": "4420", 00:21:55.022 "trtype": "TCP" 00:21:55.022 }, 00:21:55.022 "peer_address": { 00:21:55.022 "adrfam": "IPv4", 00:21:55.022 "traddr": "10.0.0.1", 00:21:55.022 "trsvcid": "33804", 00:21:55.022 "trtype": "TCP" 00:21:55.022 }, 00:21:55.022 "qid": 0, 00:21:55.022 "state": "enabled", 00:21:55.022 "thread": "nvmf_tgt_poll_group_000" 00:21:55.022 } 00:21:55.022 ]' 00:21:55.022 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:55.022 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:55.022 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.022 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:55.022 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.022 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.022 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.022 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.281 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:02:MGYzNzg3ZjM5Zjc5Njk4M2I4MGE0YWExOGYxYTVkYmUxNDY4NGRjZWY0NmZkNDdlNqeI4Q==: --dhchap-ctrl-secret DHHC-1:01:NDg0ZjRmMjdlODliODE5MTM0MjliNGZiMzgwZDZkZDfpWGLp: 00:21:56.215 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
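Once the controller is up, the script reads the negotiated parameters back from the target and asserts them before detaching. A minimal sketch of that verification step, assuming jq is available and the same subsystem NQN and default target RPC socket as above; the same three assertions repeat for every digest/dhgroup/key combination in this run:

  qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

  # The qpair must report the expected digest, DH group, and a completed auth state.
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]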
00:21:56.215 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:56.215 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.215 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.215 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.215 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:56.215 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:56.215 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:56.473 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:21:56.473 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:56.473 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:56.473 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:56.473 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:56.473 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.473 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key3 00:21:56.473 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.473 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.473 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.473 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:56.473 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:56.731 00:21:56.731 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:56.731 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:56.731 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.989 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.989 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.989 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.989 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.989 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.989 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:56.989 { 00:21:56.989 "auth": { 00:21:56.989 "dhgroup": "ffdhe3072", 00:21:56.989 "digest": "sha384", 00:21:56.989 "state": "completed" 00:21:56.989 }, 00:21:56.989 "cntlid": 71, 00:21:56.990 "listen_address": { 00:21:56.990 "adrfam": "IPv4", 00:21:56.990 "traddr": "10.0.0.2", 00:21:56.990 "trsvcid": "4420", 00:21:56.990 "trtype": "TCP" 00:21:56.990 }, 00:21:56.990 "peer_address": { 00:21:56.990 "adrfam": "IPv4", 00:21:56.990 "traddr": "10.0.0.1", 00:21:56.990 "trsvcid": "33830", 00:21:56.990 "trtype": "TCP" 00:21:56.990 }, 00:21:56.990 "qid": 0, 00:21:56.990 "state": "enabled", 00:21:56.990 "thread": "nvmf_tgt_poll_group_000" 00:21:56.990 } 00:21:56.990 ]' 00:21:56.990 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:56.990 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:56.990 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:57.247 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:57.247 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:57.247 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.247 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.247 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.506 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:03:MTFhOTg2OTBjNTdlY2QyZjZjYzcwZDQ1YWJhZDM4NTFiNDQxY2U1OWQzMDVmNWIxMmJlYjlmOTFmNGM3NTRlMB/K07c=: 00:21:58.072 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.072 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:21:58.072 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.072 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.072 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:21:58.072 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:58.072 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:58.072 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:58.072 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:58.639 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:21:58.639 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:58.639 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:58.639 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:58.639 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:58.639 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.639 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.639 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.639 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.639 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.639 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.639 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.903 00:21:58.903 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:58.903 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:58.903 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.162 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.162 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.162 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.162 18:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.162 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.162 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:59.162 { 00:21:59.162 "auth": { 00:21:59.162 "dhgroup": "ffdhe4096", 00:21:59.162 "digest": "sha384", 00:21:59.162 "state": "completed" 00:21:59.162 }, 00:21:59.162 "cntlid": 73, 00:21:59.162 "listen_address": { 00:21:59.162 "adrfam": "IPv4", 00:21:59.162 "traddr": "10.0.0.2", 00:21:59.162 "trsvcid": "4420", 00:21:59.162 "trtype": "TCP" 00:21:59.162 }, 00:21:59.162 "peer_address": { 00:21:59.162 "adrfam": "IPv4", 00:21:59.162 "traddr": "10.0.0.1", 00:21:59.162 "trsvcid": "33848", 00:21:59.162 "trtype": "TCP" 00:21:59.162 }, 00:21:59.162 "qid": 0, 00:21:59.162 "state": "enabled", 00:21:59.162 "thread": "nvmf_tgt_poll_group_000" 00:21:59.162 } 00:21:59.162 ]' 00:21:59.162 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:59.162 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:59.162 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:59.162 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:59.162 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:59.441 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.441 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.441 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.699 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:00:OTA0ZWZlYWE4NmVkZTI2N2EzZTU3NGNkOWMxZWJjNDA4ZWFjNmFlMTAwZDRhMzYy01x3lw==: --dhchap-ctrl-secret DHHC-1:03:OTBjMGE0YmU2NDdjZjYzMDU0NGU0MzE4Y2FjMTQwMWE2NTgyY2RlMGRlM2EwMWExMzQ0N2Y2N2M4N2YwZGNmNWRVl8M=: 00:22:00.265 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.265 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:00.265 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.265 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.265 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.265 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:00.265 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:00.265 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:00.832 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:22:00.832 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:00.832 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:00.832 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:00.832 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:00.832 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.832 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.832 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.832 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.832 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.832 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.832 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.090 00:22:01.090 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:01.090 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.090 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:01.348 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.348 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.348 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.348 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.348 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.348 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:01.348 { 00:22:01.348 "auth": { 00:22:01.348 "dhgroup": "ffdhe4096", 
00:22:01.348 "digest": "sha384", 00:22:01.348 "state": "completed" 00:22:01.348 }, 00:22:01.348 "cntlid": 75, 00:22:01.348 "listen_address": { 00:22:01.348 "adrfam": "IPv4", 00:22:01.348 "traddr": "10.0.0.2", 00:22:01.348 "trsvcid": "4420", 00:22:01.348 "trtype": "TCP" 00:22:01.348 }, 00:22:01.348 "peer_address": { 00:22:01.348 "adrfam": "IPv4", 00:22:01.348 "traddr": "10.0.0.1", 00:22:01.348 "trsvcid": "33876", 00:22:01.348 "trtype": "TCP" 00:22:01.348 }, 00:22:01.348 "qid": 0, 00:22:01.348 "state": "enabled", 00:22:01.348 "thread": "nvmf_tgt_poll_group_000" 00:22:01.348 } 00:22:01.348 ]' 00:22:01.348 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:01.348 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:01.348 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:01.348 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:01.348 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:01.606 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.606 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.606 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.865 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:01:ZGE0NTk0NWEyNjIzOWU1ODE5NDA3MDQ0MDY0NGE4ZWZdneta: --dhchap-ctrl-secret DHHC-1:02:ZDNmNzVmNmE2ZjBkZWJmMmJiOWYwNWNlODUxMDVmMGFlMDc1NmQwOWRlM2M0ZjE5utXpNg==: 00:22:02.431 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.431 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:02.431 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.431 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.431 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.431 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:02.431 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:02.431 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:02.689 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 
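Each iteration also exercises the Linux kernel initiator: nvme-cli connects with the DHHC-1 secrets that correspond to the configured key pair and tears the session down before the next combination. A sketch of that leg, with the secret strings reduced to placeholders (<host-secret>, <ctrl-secret>) rather than the full DHHC-1:xx:... values printed above:

  hostid=0b8484e2-e129-4a11-8748-0b3c728771da

  # Kernel initiator: authenticated connect, then disconnect before the next key/dhgroup pass.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "nqn.2014-08.org.nvmexpress:uuid:${hostid}" --hostid "$hostid" \
      --dhchap-secret '<host-secret>' --dhchap-ctrl-secret '<ctrl-secret>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0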
00:22:02.689 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:02.689 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:02.689 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:02.689 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:02.689 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.690 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:02.690 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.690 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.690 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.690 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:02.690 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.254 00:22:03.254 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:03.254 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.254 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:03.512 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.512 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.512 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.512 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.512 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.512 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:03.512 { 00:22:03.512 "auth": { 00:22:03.512 "dhgroup": "ffdhe4096", 00:22:03.512 "digest": "sha384", 00:22:03.512 "state": "completed" 00:22:03.512 }, 00:22:03.512 "cntlid": 77, 00:22:03.512 "listen_address": { 00:22:03.512 "adrfam": "IPv4", 00:22:03.512 "traddr": "10.0.0.2", 00:22:03.512 "trsvcid": "4420", 00:22:03.512 "trtype": "TCP" 00:22:03.512 }, 00:22:03.512 "peer_address": { 00:22:03.512 "adrfam": "IPv4", 00:22:03.512 "traddr": "10.0.0.1", 00:22:03.512 "trsvcid": "40934", 00:22:03.512 "trtype": 
"TCP" 00:22:03.512 }, 00:22:03.512 "qid": 0, 00:22:03.512 "state": "enabled", 00:22:03.512 "thread": "nvmf_tgt_poll_group_000" 00:22:03.512 } 00:22:03.512 ]' 00:22:03.512 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:03.512 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:03.512 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:03.512 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:03.512 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:03.512 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.512 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.512 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.077 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:02:MGYzNzg3ZjM5Zjc5Njk4M2I4MGE0YWExOGYxYTVkYmUxNDY4NGRjZWY0NmZkNDdlNqeI4Q==: --dhchap-ctrl-secret DHHC-1:01:NDg0ZjRmMjdlODliODE5MTM0MjliNGZiMzgwZDZkZDfpWGLp: 00:22:04.642 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.642 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:04.642 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.642 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.642 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.643 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:04.643 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:04.643 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:04.900 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:22:04.900 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:04.900 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:04.900 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:04.900 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key3 00:22:04.900 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.900 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key3 00:22:04.900 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.900 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.900 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.900 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:04.900 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:05.157 00:22:05.415 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:05.415 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:05.415 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.673 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.673 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.673 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.673 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.673 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.673 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:05.673 { 00:22:05.673 "auth": { 00:22:05.673 "dhgroup": "ffdhe4096", 00:22:05.673 "digest": "sha384", 00:22:05.673 "state": "completed" 00:22:05.673 }, 00:22:05.673 "cntlid": 79, 00:22:05.673 "listen_address": { 00:22:05.673 "adrfam": "IPv4", 00:22:05.673 "traddr": "10.0.0.2", 00:22:05.673 "trsvcid": "4420", 00:22:05.673 "trtype": "TCP" 00:22:05.673 }, 00:22:05.673 "peer_address": { 00:22:05.673 "adrfam": "IPv4", 00:22:05.673 "traddr": "10.0.0.1", 00:22:05.673 "trsvcid": "40962", 00:22:05.673 "trtype": "TCP" 00:22:05.673 }, 00:22:05.673 "qid": 0, 00:22:05.673 "state": "enabled", 00:22:05.673 "thread": "nvmf_tgt_poll_group_000" 00:22:05.673 } 00:22:05.673 ]' 00:22:05.673 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:05.673 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:05.673 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 
00:22:05.673 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:05.673 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:05.673 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.673 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.673 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.931 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:03:MTFhOTg2OTBjNTdlY2QyZjZjYzcwZDQ1YWJhZDM4NTFiNDQxY2U1OWQzMDVmNWIxMmJlYjlmOTFmNGM3NTRlMB/K07c=: 00:22:06.864 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.864 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:06.864 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.864 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.864 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.864 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:06.864 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:06.864 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:06.864 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:07.122 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:22:07.122 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:07.122 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:07.122 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:07.122 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:07.122 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.122 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.122 18:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.122 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.122 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.122 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.122 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.688 00:22:07.688 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:07.688 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:07.688 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.945 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.945 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.945 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.945 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.945 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.945 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:07.945 { 00:22:07.945 "auth": { 00:22:07.945 "dhgroup": "ffdhe6144", 00:22:07.945 "digest": "sha384", 00:22:07.945 "state": "completed" 00:22:07.945 }, 00:22:07.945 "cntlid": 81, 00:22:07.945 "listen_address": { 00:22:07.945 "adrfam": "IPv4", 00:22:07.945 "traddr": "10.0.0.2", 00:22:07.945 "trsvcid": "4420", 00:22:07.945 "trtype": "TCP" 00:22:07.945 }, 00:22:07.945 "peer_address": { 00:22:07.945 "adrfam": "IPv4", 00:22:07.945 "traddr": "10.0.0.1", 00:22:07.945 "trsvcid": "40992", 00:22:07.945 "trtype": "TCP" 00:22:07.945 }, 00:22:07.945 "qid": 0, 00:22:07.945 "state": "enabled", 00:22:07.945 "thread": "nvmf_tgt_poll_group_000" 00:22:07.945 } 00:22:07.945 ]' 00:22:07.945 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:07.945 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:07.945 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:07.945 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:07.945 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:08.203 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:22:08.203 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.203 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.460 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:00:OTA0ZWZlYWE4NmVkZTI2N2EzZTU3NGNkOWMxZWJjNDA4ZWFjNmFlMTAwZDRhMzYy01x3lw==: --dhchap-ctrl-secret DHHC-1:03:OTBjMGE0YmU2NDdjZjYzMDU0NGU0MzE4Y2FjMTQwMWE2NTgyY2RlMGRlM2EwMWExMzQ0N2Y2N2M4N2YwZGNmNWRVl8M=: 00:22:09.128 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.128 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:09.128 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.128 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.128 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.128 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:09.128 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:09.128 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:09.386 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:22:09.386 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:09.386 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:09.386 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:09.386 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:09.386 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.386 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.386 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.386 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.386 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.386 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.386 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.653 00:22:09.653 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:09.653 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.653 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:09.917 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.917 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.917 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.917 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.917 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.917 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:09.917 { 00:22:09.917 "auth": { 00:22:09.917 "dhgroup": "ffdhe6144", 00:22:09.917 "digest": "sha384", 00:22:09.917 "state": "completed" 00:22:09.917 }, 00:22:09.917 "cntlid": 83, 00:22:09.917 "listen_address": { 00:22:09.917 "adrfam": "IPv4", 00:22:09.917 "traddr": "10.0.0.2", 00:22:09.917 "trsvcid": "4420", 00:22:09.917 "trtype": "TCP" 00:22:09.917 }, 00:22:09.917 "peer_address": { 00:22:09.917 "adrfam": "IPv4", 00:22:09.917 "traddr": "10.0.0.1", 00:22:09.917 "trsvcid": "41018", 00:22:09.917 "trtype": "TCP" 00:22:09.917 }, 00:22:09.917 "qid": 0, 00:22:09.917 "state": "enabled", 00:22:09.917 "thread": "nvmf_tgt_poll_group_000" 00:22:09.918 } 00:22:09.918 ]' 00:22:09.918 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:10.176 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:10.176 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:10.176 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:10.176 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:10.176 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.176 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.176 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.433 18:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:01:ZGE0NTk0NWEyNjIzOWU1ODE5NDA3MDQ0MDY0NGE4ZWZdneta: --dhchap-ctrl-secret DHHC-1:02:ZDNmNzVmNmE2ZjBkZWJmMmJiOWYwNWNlODUxMDVmMGFlMDc1NmQwOWRlM2M0ZjE5utXpNg==: 00:22:11.365 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.366 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:11.366 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.366 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.366 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.366 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:11.366 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:11.366 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:11.366 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:22:11.366 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:11.366 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:11.366 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:11.366 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:11.366 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.366 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.366 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.366 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.366 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.366 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.366 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.930 00:22:11.930 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:11.930 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:11.930 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.187 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.187 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.187 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.187 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.187 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.187 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:12.187 { 00:22:12.187 "auth": { 00:22:12.187 "dhgroup": "ffdhe6144", 00:22:12.187 "digest": "sha384", 00:22:12.187 "state": "completed" 00:22:12.187 }, 00:22:12.187 "cntlid": 85, 00:22:12.187 "listen_address": { 00:22:12.187 "adrfam": "IPv4", 00:22:12.187 "traddr": "10.0.0.2", 00:22:12.187 "trsvcid": "4420", 00:22:12.187 "trtype": "TCP" 00:22:12.187 }, 00:22:12.187 "peer_address": { 00:22:12.187 "adrfam": "IPv4", 00:22:12.187 "traddr": "10.0.0.1", 00:22:12.187 "trsvcid": "48398", 00:22:12.187 "trtype": "TCP" 00:22:12.187 }, 00:22:12.187 "qid": 0, 00:22:12.187 "state": "enabled", 00:22:12.187 "thread": "nvmf_tgt_poll_group_000" 00:22:12.187 } 00:22:12.187 ]' 00:22:12.187 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:12.187 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:12.187 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:12.446 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:12.446 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:12.446 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.446 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.446 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.705 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:02:MGYzNzg3ZjM5Zjc5Njk4M2I4MGE0YWExOGYxYTVkYmUxNDY4NGRjZWY0NmZkNDdlNqeI4Q==: --dhchap-ctrl-secret 
DHHC-1:01:NDg0ZjRmMjdlODliODE5MTM0MjliNGZiMzgwZDZkZDfpWGLp: 00:22:13.637 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.637 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:13.637 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.637 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.637 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.637 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:13.637 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:13.637 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:13.637 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:22:13.637 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:13.637 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:13.637 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:13.637 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:13.637 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.637 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key3 00:22:13.637 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.637 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.637 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.637 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:13.637 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:14.202 00:22:14.202 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:14.202 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:22:14.202 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.460 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.460 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.460 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.460 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.460 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.460 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:14.460 { 00:22:14.460 "auth": { 00:22:14.460 "dhgroup": "ffdhe6144", 00:22:14.460 "digest": "sha384", 00:22:14.460 "state": "completed" 00:22:14.460 }, 00:22:14.460 "cntlid": 87, 00:22:14.460 "listen_address": { 00:22:14.460 "adrfam": "IPv4", 00:22:14.460 "traddr": "10.0.0.2", 00:22:14.460 "trsvcid": "4420", 00:22:14.460 "trtype": "TCP" 00:22:14.460 }, 00:22:14.460 "peer_address": { 00:22:14.460 "adrfam": "IPv4", 00:22:14.460 "traddr": "10.0.0.1", 00:22:14.460 "trsvcid": "48420", 00:22:14.460 "trtype": "TCP" 00:22:14.460 }, 00:22:14.460 "qid": 0, 00:22:14.460 "state": "enabled", 00:22:14.460 "thread": "nvmf_tgt_poll_group_000" 00:22:14.460 } 00:22:14.460 ]' 00:22:14.460 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:14.460 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:14.460 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:14.718 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:14.718 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:14.718 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.718 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.718 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.976 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:03:MTFhOTg2OTBjNTdlY2QyZjZjYzcwZDQ1YWJhZDM4NTFiNDQxY2U1OWQzMDVmNWIxMmJlYjlmOTFmNGM3NTRlMB/K07c=: 00:22:15.542 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.542 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:15.542 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.542 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.542 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.542 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:15.542 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:15.542 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:15.542 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:15.800 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:22:15.800 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:15.800 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:15.800 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:15.800 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:15.800 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.800 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.800 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.800 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.800 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.800 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.800 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.735 00:22:16.735 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:16.735 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.735 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:16.735 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.735 18:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.735 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.735 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.735 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.735 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:16.735 { 00:22:16.735 "auth": { 00:22:16.735 "dhgroup": "ffdhe8192", 00:22:16.735 "digest": "sha384", 00:22:16.735 "state": "completed" 00:22:16.735 }, 00:22:16.735 "cntlid": 89, 00:22:16.735 "listen_address": { 00:22:16.735 "adrfam": "IPv4", 00:22:16.735 "traddr": "10.0.0.2", 00:22:16.735 "trsvcid": "4420", 00:22:16.735 "trtype": "TCP" 00:22:16.735 }, 00:22:16.735 "peer_address": { 00:22:16.735 "adrfam": "IPv4", 00:22:16.735 "traddr": "10.0.0.1", 00:22:16.735 "trsvcid": "48448", 00:22:16.735 "trtype": "TCP" 00:22:16.735 }, 00:22:16.735 "qid": 0, 00:22:16.735 "state": "enabled", 00:22:16.735 "thread": "nvmf_tgt_poll_group_000" 00:22:16.735 } 00:22:16.735 ]' 00:22:16.735 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:16.735 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:16.735 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:16.994 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:16.994 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:16.994 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.994 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.994 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.252 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:00:OTA0ZWZlYWE4NmVkZTI2N2EzZTU3NGNkOWMxZWJjNDA4ZWFjNmFlMTAwZDRhMzYy01x3lw==: --dhchap-ctrl-secret DHHC-1:03:OTBjMGE0YmU2NDdjZjYzMDU0NGU0MzE4Y2FjMTQwMWE2NTgyY2RlMGRlM2EwMWExMzQ0N2Y2N2M4N2YwZGNmNWRVl8M=: 00:22:18.186 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.186 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:18.186 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.186 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.186 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.186 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:18.186 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:18.186 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:18.186 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:22:18.186 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:18.186 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:18.186 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:18.186 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:18.186 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.186 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.186 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.187 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.187 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.187 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.187 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.121 00:22:19.121 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:19.121 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.121 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:19.121 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.121 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.121 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.121 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.121 18:29:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.121 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:19.121 { 00:22:19.121 "auth": { 00:22:19.121 "dhgroup": "ffdhe8192", 00:22:19.121 "digest": "sha384", 00:22:19.121 "state": "completed" 00:22:19.121 }, 00:22:19.121 "cntlid": 91, 00:22:19.121 "listen_address": { 00:22:19.121 "adrfam": "IPv4", 00:22:19.121 "traddr": "10.0.0.2", 00:22:19.121 "trsvcid": "4420", 00:22:19.121 "trtype": "TCP" 00:22:19.121 }, 00:22:19.121 "peer_address": { 00:22:19.121 "adrfam": "IPv4", 00:22:19.121 "traddr": "10.0.0.1", 00:22:19.121 "trsvcid": "48478", 00:22:19.121 "trtype": "TCP" 00:22:19.121 }, 00:22:19.121 "qid": 0, 00:22:19.121 "state": "enabled", 00:22:19.121 "thread": "nvmf_tgt_poll_group_000" 00:22:19.121 } 00:22:19.121 ]' 00:22:19.121 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:19.379 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:19.379 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:19.379 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:19.379 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:19.379 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.379 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.379 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.636 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:01:ZGE0NTk0NWEyNjIzOWU1ODE5NDA3MDQ0MDY0NGE4ZWZdneta: --dhchap-ctrl-secret DHHC-1:02:ZDNmNzVmNmE2ZjBkZWJmMmJiOWYwNWNlODUxMDVmMGFlMDc1NmQwOWRlM2M0ZjE5utXpNg==: 00:22:20.568 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.568 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:20.568 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.568 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.568 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.568 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:20.568 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:20.568 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:20.826 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:22:20.826 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:20.826 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:20.826 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:20.826 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:20.826 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.826 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:20.826 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.827 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.827 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.827 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:20.827 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.392 00:22:21.392 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:21.392 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:21.392 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.654 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.654 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.654 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.654 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.911 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.911 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:21.911 { 00:22:21.911 "auth": { 00:22:21.911 "dhgroup": "ffdhe8192", 00:22:21.911 "digest": "sha384", 00:22:21.911 "state": "completed" 00:22:21.911 }, 00:22:21.911 "cntlid": 93, 00:22:21.911 "listen_address": { 00:22:21.911 "adrfam": 
"IPv4", 00:22:21.911 "traddr": "10.0.0.2", 00:22:21.911 "trsvcid": "4420", 00:22:21.911 "trtype": "TCP" 00:22:21.911 }, 00:22:21.911 "peer_address": { 00:22:21.911 "adrfam": "IPv4", 00:22:21.911 "traddr": "10.0.0.1", 00:22:21.911 "trsvcid": "48518", 00:22:21.911 "trtype": "TCP" 00:22:21.911 }, 00:22:21.911 "qid": 0, 00:22:21.911 "state": "enabled", 00:22:21.911 "thread": "nvmf_tgt_poll_group_000" 00:22:21.911 } 00:22:21.911 ]' 00:22:21.911 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:21.911 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:21.911 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:21.911 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:21.911 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:21.911 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.911 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.911 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.169 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:02:MGYzNzg3ZjM5Zjc5Njk4M2I4MGE0YWExOGYxYTVkYmUxNDY4NGRjZWY0NmZkNDdlNqeI4Q==: --dhchap-ctrl-secret DHHC-1:01:NDg0ZjRmMjdlODliODE5MTM0MjliNGZiMzgwZDZkZDfpWGLp: 00:22:23.103 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.103 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:23.103 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.103 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.103 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.103 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:23.103 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:23.103 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:23.361 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:22:23.361 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:23.361 18:29:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:23.361 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:23.361 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:23.361 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.361 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key3 00:22:23.361 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.361 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.361 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.361 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:23.361 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:23.926 00:22:23.926 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:23.926 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:23.926 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.492 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.492 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.492 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.492 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.492 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.492 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:24.492 { 00:22:24.492 "auth": { 00:22:24.492 "dhgroup": "ffdhe8192", 00:22:24.492 "digest": "sha384", 00:22:24.492 "state": "completed" 00:22:24.492 }, 00:22:24.492 "cntlid": 95, 00:22:24.492 "listen_address": { 00:22:24.492 "adrfam": "IPv4", 00:22:24.492 "traddr": "10.0.0.2", 00:22:24.492 "trsvcid": "4420", 00:22:24.492 "trtype": "TCP" 00:22:24.492 }, 00:22:24.492 "peer_address": { 00:22:24.492 "adrfam": "IPv4", 00:22:24.492 "traddr": "10.0.0.1", 00:22:24.492 "trsvcid": "38710", 00:22:24.492 "trtype": "TCP" 00:22:24.492 }, 00:22:24.492 "qid": 0, 00:22:24.492 "state": "enabled", 00:22:24.492 "thread": "nvmf_tgt_poll_group_000" 00:22:24.492 } 00:22:24.492 ]' 00:22:24.492 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:24.492 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:24.492 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:24.492 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:24.492 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:24.492 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.492 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.492 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.750 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:03:MTFhOTg2OTBjNTdlY2QyZjZjYzcwZDQ1YWJhZDM4NTFiNDQxY2U1OWQzMDVmNWIxMmJlYjlmOTFmNGM3NTRlMB/K07c=: 00:22:25.684 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.684 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:25.684 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.684 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.684 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.684 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:22:25.684 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:25.684 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:25.684 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:25.684 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:25.942 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:22:25.943 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:25.943 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:25.943 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:25.943 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:25.943 18:29:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.943 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.943 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.943 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.943 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.943 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.943 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.201 00:22:26.201 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:26.201 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:26.201 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.459 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.459 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.459 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.459 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.459 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.459 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:26.459 { 00:22:26.459 "auth": { 00:22:26.459 "dhgroup": "null", 00:22:26.459 "digest": "sha512", 00:22:26.459 "state": "completed" 00:22:26.459 }, 00:22:26.459 "cntlid": 97, 00:22:26.459 "listen_address": { 00:22:26.459 "adrfam": "IPv4", 00:22:26.459 "traddr": "10.0.0.2", 00:22:26.459 "trsvcid": "4420", 00:22:26.459 "trtype": "TCP" 00:22:26.459 }, 00:22:26.459 "peer_address": { 00:22:26.459 "adrfam": "IPv4", 00:22:26.459 "traddr": "10.0.0.1", 00:22:26.459 "trsvcid": "38738", 00:22:26.459 "trtype": "TCP" 00:22:26.459 }, 00:22:26.459 "qid": 0, 00:22:26.459 "state": "enabled", 00:22:26.459 "thread": "nvmf_tgt_poll_group_000" 00:22:26.459 } 00:22:26.459 ]' 00:22:26.459 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:26.459 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:26.459 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:26.459 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:26.459 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:26.729 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.729 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.729 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.999 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:00:OTA0ZWZlYWE4NmVkZTI2N2EzZTU3NGNkOWMxZWJjNDA4ZWFjNmFlMTAwZDRhMzYy01x3lw==: --dhchap-ctrl-secret DHHC-1:03:OTBjMGE0YmU2NDdjZjYzMDU0NGU0MzE4Y2FjMTQwMWE2NTgyY2RlMGRlM2EwMWExMzQ0N2Y2N2M4N2YwZGNmNWRVl8M=: 00:22:27.564 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.564 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:27.564 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.564 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.564 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.564 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:27.564 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:27.564 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:27.823 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:22:27.823 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:27.823 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:27.823 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:27.823 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:27.823 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.823 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.823 18:29:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.823 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.823 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.823 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.823 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.081 00:22:28.081 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:28.081 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:28.081 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.646 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.646 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.646 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.646 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.646 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.646 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:28.646 { 00:22:28.646 "auth": { 00:22:28.646 "dhgroup": "null", 00:22:28.646 "digest": "sha512", 00:22:28.646 "state": "completed" 00:22:28.646 }, 00:22:28.646 "cntlid": 99, 00:22:28.646 "listen_address": { 00:22:28.646 "adrfam": "IPv4", 00:22:28.646 "traddr": "10.0.0.2", 00:22:28.646 "trsvcid": "4420", 00:22:28.646 "trtype": "TCP" 00:22:28.646 }, 00:22:28.646 "peer_address": { 00:22:28.646 "adrfam": "IPv4", 00:22:28.646 "traddr": "10.0.0.1", 00:22:28.646 "trsvcid": "38768", 00:22:28.646 "trtype": "TCP" 00:22:28.646 }, 00:22:28.646 "qid": 0, 00:22:28.646 "state": "enabled", 00:22:28.646 "thread": "nvmf_tgt_poll_group_000" 00:22:28.646 } 00:22:28.646 ]' 00:22:28.646 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:28.646 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:28.646 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:28.646 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:28.646 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:28.646 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
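For reference, the sha512 / null-DH-group iteration traced above boils down to the short RPC sequence sketched below. It reuses only commands, sockets and NQNs that appear verbatim in this log; rpc_cmd and hostrpc are the test script's wrappers for the target-side and host-side (-s /var/tmp/host.sock) rpc.py sockets, and key1/ckey1 name key material set up earlier in target/auth.sh, so treat this as an illustrative condensation of the trace rather than a standalone recipe.

  # target side: register the host NQN and pin its DH-HMAC-CHAP key pair
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host side: restrict negotiation to sha512 / null, then attach and authenticate
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1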
00:22:28.646 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.646 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.904 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:01:ZGE0NTk0NWEyNjIzOWU1ODE5NDA3MDQ0MDY0NGE4ZWZdneta: --dhchap-ctrl-secret DHHC-1:02:ZDNmNzVmNmE2ZjBkZWJmMmJiOWYwNWNlODUxMDVmMGFlMDc1NmQwOWRlM2M0ZjE5utXpNg==: 00:22:29.837 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.837 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:29.837 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.837 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.837 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.837 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:29.837 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:29.837 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:30.095 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:22:30.095 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:30.095 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:30.095 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:30.095 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:30.095 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.095 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.095 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.095 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.095 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.095 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.095 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.353 00:22:30.353 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:30.353 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.353 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:30.920 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.920 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.920 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.920 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.920 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.920 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:30.920 { 00:22:30.920 "auth": { 00:22:30.920 "dhgroup": "null", 00:22:30.920 "digest": "sha512", 00:22:30.920 "state": "completed" 00:22:30.920 }, 00:22:30.920 "cntlid": 101, 00:22:30.920 "listen_address": { 00:22:30.920 "adrfam": "IPv4", 00:22:30.920 "traddr": "10.0.0.2", 00:22:30.920 "trsvcid": "4420", 00:22:30.920 "trtype": "TCP" 00:22:30.920 }, 00:22:30.920 "peer_address": { 00:22:30.920 "adrfam": "IPv4", 00:22:30.920 "traddr": "10.0.0.1", 00:22:30.920 "trsvcid": "38806", 00:22:30.920 "trtype": "TCP" 00:22:30.920 }, 00:22:30.920 "qid": 0, 00:22:30.920 "state": "enabled", 00:22:30.920 "thread": "nvmf_tgt_poll_group_000" 00:22:30.920 } 00:22:30.920 ]' 00:22:30.920 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:30.920 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:30.920 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:30.920 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:30.920 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:30.920 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.920 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.920 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.179 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:02:MGYzNzg3ZjM5Zjc5Njk4M2I4MGE0YWExOGYxYTVkYmUxNDY4NGRjZWY0NmZkNDdlNqeI4Q==: --dhchap-ctrl-secret DHHC-1:01:NDg0ZjRmMjdlODliODE5MTM0MjliNGZiMzgwZDZkZDfpWGLp: 00:22:32.180 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.180 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:32.180 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.180 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.180 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.181 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:32.181 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:32.181 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:32.181 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:22:32.181 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:32.181 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:32.181 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:32.181 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:32.181 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.181 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key3 00:22:32.181 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.181 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.181 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.181 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:32.181 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:32.439 
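Each iteration is verified the same way before cleanup; the condensed sketch below mirrors the checks visible throughout the trace (target/auth.sh@44-@49), using the same jq filters, with the expected values for the sha512 / null / key3 attach that just completed. It is a summary of what the script does, not an additional test step.

  # host side: the attached controller should be reported as nvme0
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'
  # target side: the qpair must show the negotiated auth parameters
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | \
      jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
  # expected here: sha512, null, completed
  # clean up before the next key / dhgroup combination
  hostrpc bdev_nvme_detach_controller nvme0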
00:22:32.439 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:32.439 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:32.439 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.005 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.005 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.005 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.005 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.005 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.005 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:33.005 { 00:22:33.005 "auth": { 00:22:33.005 "dhgroup": "null", 00:22:33.005 "digest": "sha512", 00:22:33.005 "state": "completed" 00:22:33.005 }, 00:22:33.005 "cntlid": 103, 00:22:33.005 "listen_address": { 00:22:33.005 "adrfam": "IPv4", 00:22:33.005 "traddr": "10.0.0.2", 00:22:33.005 "trsvcid": "4420", 00:22:33.005 "trtype": "TCP" 00:22:33.005 }, 00:22:33.005 "peer_address": { 00:22:33.005 "adrfam": "IPv4", 00:22:33.005 "traddr": "10.0.0.1", 00:22:33.005 "trsvcid": "55818", 00:22:33.005 "trtype": "TCP" 00:22:33.005 }, 00:22:33.005 "qid": 0, 00:22:33.005 "state": "enabled", 00:22:33.005 "thread": "nvmf_tgt_poll_group_000" 00:22:33.005 } 00:22:33.005 ]' 00:22:33.005 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:33.005 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:33.005 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:33.005 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:33.005 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:33.005 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.005 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.005 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.263 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:03:MTFhOTg2OTBjNTdlY2QyZjZjYzcwZDQ1YWJhZDM4NTFiNDQxY2U1OWQzMDVmNWIxMmJlYjlmOTFmNGM3NTRlMB/K07c=: 00:22:34.207 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.207 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:34.207 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.207 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.207 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.207 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:34.207 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:34.207 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:34.207 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:34.207 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:22:34.207 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:34.207 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:34.207 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:34.207 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:34.207 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.207 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.207 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.207 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.465 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.465 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.465 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.723 00:22:34.723 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:34.723 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:34.723 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:22:34.992 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.992 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:34.992 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.992 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.992 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.992 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:34.992 { 00:22:34.992 "auth": { 00:22:34.992 "dhgroup": "ffdhe2048", 00:22:34.992 "digest": "sha512", 00:22:34.992 "state": "completed" 00:22:34.992 }, 00:22:34.992 "cntlid": 105, 00:22:34.992 "listen_address": { 00:22:34.992 "adrfam": "IPv4", 00:22:34.992 "traddr": "10.0.0.2", 00:22:34.992 "trsvcid": "4420", 00:22:34.992 "trtype": "TCP" 00:22:34.992 }, 00:22:34.992 "peer_address": { 00:22:34.992 "adrfam": "IPv4", 00:22:34.992 "traddr": "10.0.0.1", 00:22:34.992 "trsvcid": "55848", 00:22:34.992 "trtype": "TCP" 00:22:34.992 }, 00:22:34.992 "qid": 0, 00:22:34.992 "state": "enabled", 00:22:34.992 "thread": "nvmf_tgt_poll_group_000" 00:22:34.992 } 00:22:34.992 ]' 00:22:34.992 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:34.992 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:34.992 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:34.992 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:34.992 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:35.265 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.265 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.265 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.265 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:00:OTA0ZWZlYWE4NmVkZTI2N2EzZTU3NGNkOWMxZWJjNDA4ZWFjNmFlMTAwZDRhMzYy01x3lw==: --dhchap-ctrl-secret DHHC-1:03:OTBjMGE0YmU2NDdjZjYzMDU0NGU0MzE4Y2FjMTQwMWE2NTgyY2RlMGRlM2EwMWExMzQ0N2Y2N2M4N2YwZGNmNWRVl8M=: 00:22:36.200 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.200 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:36.200 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.200 18:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.200 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.200 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:36.200 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:36.200 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:36.458 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:22:36.458 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:36.458 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:36.458 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:36.458 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:36.458 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.458 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.458 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.458 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.458 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.458 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.458 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.715 00:22:36.715 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:36.715 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:36.715 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.973 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.973 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.973 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:22:36.973 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.973 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.973 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:36.973 { 00:22:36.973 "auth": { 00:22:36.973 "dhgroup": "ffdhe2048", 00:22:36.973 "digest": "sha512", 00:22:36.973 "state": "completed" 00:22:36.973 }, 00:22:36.973 "cntlid": 107, 00:22:36.973 "listen_address": { 00:22:36.973 "adrfam": "IPv4", 00:22:36.973 "traddr": "10.0.0.2", 00:22:36.973 "trsvcid": "4420", 00:22:36.973 "trtype": "TCP" 00:22:36.973 }, 00:22:36.973 "peer_address": { 00:22:36.973 "adrfam": "IPv4", 00:22:36.973 "traddr": "10.0.0.1", 00:22:36.973 "trsvcid": "55870", 00:22:36.973 "trtype": "TCP" 00:22:36.973 }, 00:22:36.973 "qid": 0, 00:22:36.973 "state": "enabled", 00:22:36.973 "thread": "nvmf_tgt_poll_group_000" 00:22:36.973 } 00:22:36.973 ]' 00:22:36.973 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:37.230 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:37.230 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:37.230 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:37.230 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:37.230 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.230 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.230 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.488 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:01:ZGE0NTk0NWEyNjIzOWU1ODE5NDA3MDQ0MDY0NGE4ZWZdneta: --dhchap-ctrl-secret DHHC-1:02:ZDNmNzVmNmE2ZjBkZWJmMmJiOWYwNWNlODUxMDVmMGFlMDc1NmQwOWRlM2M0ZjE5utXpNg==: 00:22:38.422 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.422 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:38.422 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.422 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.422 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.422 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:38.422 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:38.422 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:38.422 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:22:38.422 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:38.422 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:38.422 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:38.422 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:38.422 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.422 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.422 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.422 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.422 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.422 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.422 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.988 00:22:38.988 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:38.988 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.988 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:39.286 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.286 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.286 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.286 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.286 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.286 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:39.286 { 00:22:39.286 "auth": { 00:22:39.286 "dhgroup": "ffdhe2048", 
00:22:39.286 "digest": "sha512", 00:22:39.286 "state": "completed" 00:22:39.286 }, 00:22:39.286 "cntlid": 109, 00:22:39.286 "listen_address": { 00:22:39.286 "adrfam": "IPv4", 00:22:39.286 "traddr": "10.0.0.2", 00:22:39.286 "trsvcid": "4420", 00:22:39.286 "trtype": "TCP" 00:22:39.286 }, 00:22:39.286 "peer_address": { 00:22:39.286 "adrfam": "IPv4", 00:22:39.286 "traddr": "10.0.0.1", 00:22:39.286 "trsvcid": "55904", 00:22:39.286 "trtype": "TCP" 00:22:39.286 }, 00:22:39.286 "qid": 0, 00:22:39.286 "state": "enabled", 00:22:39.286 "thread": "nvmf_tgt_poll_group_000" 00:22:39.286 } 00:22:39.286 ]' 00:22:39.286 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:39.286 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:39.286 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:39.286 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:39.286 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:39.546 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.546 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.546 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.804 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:02:MGYzNzg3ZjM5Zjc5Njk4M2I4MGE0YWExOGYxYTVkYmUxNDY4NGRjZWY0NmZkNDdlNqeI4Q==: --dhchap-ctrl-secret DHHC-1:01:NDg0ZjRmMjdlODliODE5MTM0MjliNGZiMzgwZDZkZDfpWGLp: 00:22:40.370 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.370 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:40.370 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.370 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.370 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.370 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:40.370 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:40.370 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:40.629 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 
00:22:40.629 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:40.629 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:40.629 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:40.629 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:40.629 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:40.629 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key3 00:22:40.629 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.629 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.629 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.629 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:40.629 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:40.887 00:22:41.145 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:41.145 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:41.145 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.403 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.403 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.403 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.403 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.403 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.403 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:41.403 { 00:22:41.403 "auth": { 00:22:41.403 "dhgroup": "ffdhe2048", 00:22:41.403 "digest": "sha512", 00:22:41.403 "state": "completed" 00:22:41.403 }, 00:22:41.403 "cntlid": 111, 00:22:41.403 "listen_address": { 00:22:41.403 "adrfam": "IPv4", 00:22:41.403 "traddr": "10.0.0.2", 00:22:41.403 "trsvcid": "4420", 00:22:41.403 "trtype": "TCP" 00:22:41.403 }, 00:22:41.403 "peer_address": { 00:22:41.403 "adrfam": "IPv4", 00:22:41.403 "traddr": "10.0.0.1", 00:22:41.403 "trsvcid": "55938", 00:22:41.403 "trtype": "TCP" 00:22:41.403 }, 00:22:41.403 "qid": 0, 00:22:41.403 "state": 
"enabled", 00:22:41.403 "thread": "nvmf_tgt_poll_group_000" 00:22:41.403 } 00:22:41.403 ]' 00:22:41.403 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:41.403 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:41.403 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:41.403 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:41.403 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:41.403 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.403 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.403 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.661 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:03:MTFhOTg2OTBjNTdlY2QyZjZjYzcwZDQ1YWJhZDM4NTFiNDQxY2U1OWQzMDVmNWIxMmJlYjlmOTFmNGM3NTRlMB/K07c=: 00:22:42.595 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.595 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:42.595 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.595 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.595 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.595 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:42.595 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:42.595 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:42.595 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:42.595 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:22:42.595 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:42.595 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:42.595 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:42.595 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 
00:22:42.595 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.595 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:42.595 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.595 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.595 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.595 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:42.595 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:43.161 00:22:43.161 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:43.161 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.161 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:43.418 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.418 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.418 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.418 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.418 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.418 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:43.418 { 00:22:43.418 "auth": { 00:22:43.418 "dhgroup": "ffdhe3072", 00:22:43.418 "digest": "sha512", 00:22:43.418 "state": "completed" 00:22:43.418 }, 00:22:43.418 "cntlid": 113, 00:22:43.418 "listen_address": { 00:22:43.418 "adrfam": "IPv4", 00:22:43.418 "traddr": "10.0.0.2", 00:22:43.418 "trsvcid": "4420", 00:22:43.418 "trtype": "TCP" 00:22:43.418 }, 00:22:43.418 "peer_address": { 00:22:43.418 "adrfam": "IPv4", 00:22:43.418 "traddr": "10.0.0.1", 00:22:43.418 "trsvcid": "41830", 00:22:43.418 "trtype": "TCP" 00:22:43.418 }, 00:22:43.418 "qid": 0, 00:22:43.418 "state": "enabled", 00:22:43.418 "thread": "nvmf_tgt_poll_group_000" 00:22:43.418 } 00:22:43.418 ]' 00:22:43.418 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:43.418 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:43.418 18:29:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:43.418 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:43.418 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:43.676 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.676 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.676 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.933 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:00:OTA0ZWZlYWE4NmVkZTI2N2EzZTU3NGNkOWMxZWJjNDA4ZWFjNmFlMTAwZDRhMzYy01x3lw==: --dhchap-ctrl-secret DHHC-1:03:OTBjMGE0YmU2NDdjZjYzMDU0NGU0MzE4Y2FjMTQwMWE2NTgyY2RlMGRlM2EwMWExMzQ0N2Y2N2M4N2YwZGNmNWRVl8M=: 00:22:44.498 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.498 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:44.498 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.498 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.756 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.756 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:44.756 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:44.756 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:45.014 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:22:45.014 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:45.014 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:45.014 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:45.014 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:45.014 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:45.014 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:45.014 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.014 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.014 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.014 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:45.014 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:45.272 00:22:45.272 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:45.272 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:45.272 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.530 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.530 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.530 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.530 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.530 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.530 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:45.530 { 00:22:45.530 "auth": { 00:22:45.530 "dhgroup": "ffdhe3072", 00:22:45.530 "digest": "sha512", 00:22:45.530 "state": "completed" 00:22:45.530 }, 00:22:45.530 "cntlid": 115, 00:22:45.530 "listen_address": { 00:22:45.530 "adrfam": "IPv4", 00:22:45.530 "traddr": "10.0.0.2", 00:22:45.530 "trsvcid": "4420", 00:22:45.530 "trtype": "TCP" 00:22:45.530 }, 00:22:45.530 "peer_address": { 00:22:45.530 "adrfam": "IPv4", 00:22:45.530 "traddr": "10.0.0.1", 00:22:45.530 "trsvcid": "41864", 00:22:45.530 "trtype": "TCP" 00:22:45.530 }, 00:22:45.530 "qid": 0, 00:22:45.530 "state": "enabled", 00:22:45.530 "thread": "nvmf_tgt_poll_group_000" 00:22:45.530 } 00:22:45.530 ]' 00:22:45.530 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:45.788 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:45.788 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:45.788 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:45.788 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:45.788 18:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.788 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.788 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.046 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:01:ZGE0NTk0NWEyNjIzOWU1ODE5NDA3MDQ0MDY0NGE4ZWZdneta: --dhchap-ctrl-secret DHHC-1:02:ZDNmNzVmNmE2ZjBkZWJmMmJiOWYwNWNlODUxMDVmMGFlMDc1NmQwOWRlM2M0ZjE5utXpNg==: 00:22:46.652 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.652 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:46.652 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.652 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.652 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.652 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:46.652 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:46.652 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:46.911 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:22:46.911 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:46.911 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:46.911 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:46.911 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:46.911 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.170 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.170 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.170 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.170 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.170 18:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.170 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.429 00:22:47.429 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:47.429 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:47.429 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.688 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.688 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:47.688 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.688 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.688 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.688 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:47.688 { 00:22:47.688 "auth": { 00:22:47.688 "dhgroup": "ffdhe3072", 00:22:47.688 "digest": "sha512", 00:22:47.688 "state": "completed" 00:22:47.688 }, 00:22:47.688 "cntlid": 117, 00:22:47.688 "listen_address": { 00:22:47.688 "adrfam": "IPv4", 00:22:47.688 "traddr": "10.0.0.2", 00:22:47.688 "trsvcid": "4420", 00:22:47.688 "trtype": "TCP" 00:22:47.688 }, 00:22:47.688 "peer_address": { 00:22:47.688 "adrfam": "IPv4", 00:22:47.688 "traddr": "10.0.0.1", 00:22:47.688 "trsvcid": "41898", 00:22:47.688 "trtype": "TCP" 00:22:47.688 }, 00:22:47.688 "qid": 0, 00:22:47.688 "state": "enabled", 00:22:47.688 "thread": "nvmf_tgt_poll_group_000" 00:22:47.688 } 00:22:47.688 ]' 00:22:47.688 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:47.688 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:47.688 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:47.947 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:47.947 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:47.947 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:47.947 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:47.947 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
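Each cycle ends with the assertion pattern visible above: the qpair listing from nvmf_subsystem_get_qpairs is captured and its auth object is checked field by field with jq before the controller is detached. A compact restatement of that check, assuming the nvmf target answers on rpc.py's default socket as configured earlier in this job; the expected values sha512/ffdhe3072/completed are the ones printed in the qpair dump above.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
  # the negotiated digest, DH group and final auth state must match what was configured
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]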
00:22:48.205 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:02:MGYzNzg3ZjM5Zjc5Njk4M2I4MGE0YWExOGYxYTVkYmUxNDY4NGRjZWY0NmZkNDdlNqeI4Q==: --dhchap-ctrl-secret DHHC-1:01:NDg0ZjRmMjdlODliODE5MTM0MjliNGZiMzgwZDZkZDfpWGLp: 00:22:48.772 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.772 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:48.772 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.772 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.772 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.772 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:48.772 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:48.772 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:49.339 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:22:49.339 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:49.339 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:49.339 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:49.339 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:49.339 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.339 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key3 00:22:49.339 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.339 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.339 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.339 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:49.339 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:49.597 00:22:49.597 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:49.597 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:49.597 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.855 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.855 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.855 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.855 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.855 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.855 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:49.855 { 00:22:49.855 "auth": { 00:22:49.855 "dhgroup": "ffdhe3072", 00:22:49.855 "digest": "sha512", 00:22:49.855 "state": "completed" 00:22:49.855 }, 00:22:49.855 "cntlid": 119, 00:22:49.855 "listen_address": { 00:22:49.855 "adrfam": "IPv4", 00:22:49.855 "traddr": "10.0.0.2", 00:22:49.855 "trsvcid": "4420", 00:22:49.855 "trtype": "TCP" 00:22:49.855 }, 00:22:49.855 "peer_address": { 00:22:49.855 "adrfam": "IPv4", 00:22:49.855 "traddr": "10.0.0.1", 00:22:49.855 "trsvcid": "41932", 00:22:49.855 "trtype": "TCP" 00:22:49.855 }, 00:22:49.855 "qid": 0, 00:22:49.855 "state": "enabled", 00:22:49.855 "thread": "nvmf_tgt_poll_group_000" 00:22:49.855 } 00:22:49.855 ]' 00:22:49.855 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:49.855 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:49.855 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:49.855 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:49.855 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:50.112 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.113 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.113 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.371 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:03:MTFhOTg2OTBjNTdlY2QyZjZjYzcwZDQ1YWJhZDM4NTFiNDQxY2U1OWQzMDVmNWIxMmJlYjlmOTFmNGM3NTRlMB/K07c=: 00:22:50.937 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:22:50.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.937 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:50.937 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.937 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.937 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.937 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:50.937 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:50.937 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:50.937 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:51.196 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:22:51.196 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:51.196 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:51.196 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:51.196 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:51.196 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:51.196 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.196 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.196 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.196 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.196 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.196 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.457 00:22:51.716 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:51.716 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:22:51.716 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.973 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.973 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:51.973 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.973 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.973 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.973 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:51.973 { 00:22:51.973 "auth": { 00:22:51.973 "dhgroup": "ffdhe4096", 00:22:51.973 "digest": "sha512", 00:22:51.973 "state": "completed" 00:22:51.973 }, 00:22:51.973 "cntlid": 121, 00:22:51.973 "listen_address": { 00:22:51.974 "adrfam": "IPv4", 00:22:51.974 "traddr": "10.0.0.2", 00:22:51.974 "trsvcid": "4420", 00:22:51.974 "trtype": "TCP" 00:22:51.974 }, 00:22:51.974 "peer_address": { 00:22:51.974 "adrfam": "IPv4", 00:22:51.974 "traddr": "10.0.0.1", 00:22:51.974 "trsvcid": "56484", 00:22:51.974 "trtype": "TCP" 00:22:51.974 }, 00:22:51.974 "qid": 0, 00:22:51.974 "state": "enabled", 00:22:51.974 "thread": "nvmf_tgt_poll_group_000" 00:22:51.974 } 00:22:51.974 ]' 00:22:51.974 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:51.974 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:51.974 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:51.974 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:51.974 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:51.974 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:51.974 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.974 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:52.232 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:00:OTA0ZWZlYWE4NmVkZTI2N2EzZTU3NGNkOWMxZWJjNDA4ZWFjNmFlMTAwZDRhMzYy01x3lw==: --dhchap-ctrl-secret DHHC-1:03:OTBjMGE0YmU2NDdjZjYzMDU0NGU0MzE4Y2FjMTQwMWE2NTgyY2RlMGRlM2EwMWExMzQ0N2Y2N2M4N2YwZGNmNWRVl8M=: 00:22:53.192 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.192 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:53.192 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.192 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.192 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.192 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:53.192 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:53.192 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:53.192 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:22:53.192 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:53.192 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:53.192 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:53.192 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:53.192 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:53.192 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:53.192 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.192 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.192 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.192 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:53.192 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:53.758 00:22:53.758 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:53.758 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.758 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:54.017 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.017 18:30:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:54.017 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.017 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.017 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.017 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:54.017 { 00:22:54.017 "auth": { 00:22:54.017 "dhgroup": "ffdhe4096", 00:22:54.017 "digest": "sha512", 00:22:54.017 "state": "completed" 00:22:54.017 }, 00:22:54.017 "cntlid": 123, 00:22:54.017 "listen_address": { 00:22:54.017 "adrfam": "IPv4", 00:22:54.017 "traddr": "10.0.0.2", 00:22:54.017 "trsvcid": "4420", 00:22:54.017 "trtype": "TCP" 00:22:54.017 }, 00:22:54.017 "peer_address": { 00:22:54.017 "adrfam": "IPv4", 00:22:54.017 "traddr": "10.0.0.1", 00:22:54.017 "trsvcid": "56510", 00:22:54.017 "trtype": "TCP" 00:22:54.017 }, 00:22:54.017 "qid": 0, 00:22:54.017 "state": "enabled", 00:22:54.017 "thread": "nvmf_tgt_poll_group_000" 00:22:54.017 } 00:22:54.017 ]' 00:22:54.017 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:54.017 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:54.017 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:54.017 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:54.017 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:54.017 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:54.017 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:54.017 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.275 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:01:ZGE0NTk0NWEyNjIzOWU1ODE5NDA3MDQ0MDY0NGE4ZWZdneta: --dhchap-ctrl-secret DHHC-1:02:ZDNmNzVmNmE2ZjBkZWJmMmJiOWYwNWNlODUxMDVmMGFlMDc1NmQwOWRlM2M0ZjE5utXpNg==: 00:22:54.904 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:54.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:54.904 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:54.904 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.904 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.904 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
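For reference, the per-key cycle that target/auth.sh keeps repeating above reduces to the RPC sequence below. This is a condensed sketch assembled from the commands visible in this log, not the literal script: the target-side calls go to the target app's default RPC socket (which rpc_cmd hides here), and key1/ckey1 are DH-HMAC-CHAP key names registered earlier in the script.

HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da"
SUBNQN="nqn.2024-03.io.spdk:cnode0"
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Host side: restrict bdev_nvme to the digest/dhgroup pair under test (sha512 + ffdhe4096 here).
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
# Target side (default RPC socket): allow the host with a key pair; the key3 iterations omit the ctrlr key.
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Host side: attach a controller with the matching keys.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
  -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Target side: confirm authentication completed on the qpair, then tear everything down.
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # expected: completed
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"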
00:22:54.904 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:54.904 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:54.904 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:55.467 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:22:55.467 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:55.467 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:55.467 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:55.467 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:55.468 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:55.468 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:55.468 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.468 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.468 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.468 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:55.468 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:55.725 00:22:55.725 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:55.725 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.725 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:55.982 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.982 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.982 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.982 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.982 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.982 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:55.982 { 00:22:55.982 "auth": { 00:22:55.982 "dhgroup": "ffdhe4096", 00:22:55.982 "digest": "sha512", 00:22:55.982 "state": "completed" 00:22:55.982 }, 00:22:55.982 "cntlid": 125, 00:22:55.982 "listen_address": { 00:22:55.982 "adrfam": "IPv4", 00:22:55.982 "traddr": "10.0.0.2", 00:22:55.982 "trsvcid": "4420", 00:22:55.982 "trtype": "TCP" 00:22:55.982 }, 00:22:55.982 "peer_address": { 00:22:55.982 "adrfam": "IPv4", 00:22:55.982 "traddr": "10.0.0.1", 00:22:55.982 "trsvcid": "56552", 00:22:55.982 "trtype": "TCP" 00:22:55.982 }, 00:22:55.982 "qid": 0, 00:22:55.982 "state": "enabled", 00:22:55.982 "thread": "nvmf_tgt_poll_group_000" 00:22:55.982 } 00:22:55.982 ]' 00:22:55.982 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:55.982 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:55.982 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:55.982 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:55.982 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:55.982 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:55.982 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.982 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.239 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:02:MGYzNzg3ZjM5Zjc5Njk4M2I4MGE0YWExOGYxYTVkYmUxNDY4NGRjZWY0NmZkNDdlNqeI4Q==: --dhchap-ctrl-secret DHHC-1:01:NDg0ZjRmMjdlODliODE5MTM0MjliNGZiMzgwZDZkZDfpWGLp: 00:22:57.169 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.169 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:57.169 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.169 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.169 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.169 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:57.169 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:57.169 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:57.450 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:22:57.450 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:57.450 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:57.450 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:57.450 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:57.450 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:57.450 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key3 00:22:57.450 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.450 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.450 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.450 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:57.450 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:57.720 00:22:57.720 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:57.720 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:57.720 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.978 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.978 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:57.978 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.978 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.978 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.978 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:57.978 { 00:22:57.978 "auth": { 00:22:57.978 "dhgroup": "ffdhe4096", 00:22:57.978 "digest": "sha512", 00:22:57.978 "state": "completed" 00:22:57.978 }, 00:22:57.978 "cntlid": 127, 00:22:57.978 "listen_address": { 00:22:57.978 "adrfam": "IPv4", 00:22:57.978 "traddr": "10.0.0.2", 00:22:57.978 "trsvcid": "4420", 00:22:57.978 "trtype": "TCP" 00:22:57.978 }, 
00:22:57.978 "peer_address": { 00:22:57.978 "adrfam": "IPv4", 00:22:57.978 "traddr": "10.0.0.1", 00:22:57.978 "trsvcid": "56568", 00:22:57.978 "trtype": "TCP" 00:22:57.978 }, 00:22:57.978 "qid": 0, 00:22:57.978 "state": "enabled", 00:22:57.978 "thread": "nvmf_tgt_poll_group_000" 00:22:57.978 } 00:22:57.978 ]' 00:22:57.978 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:57.978 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:57.978 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:58.236 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:58.236 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:58.236 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:58.236 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.236 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:58.494 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:03:MTFhOTg2OTBjNTdlY2QyZjZjYzcwZDQ1YWJhZDM4NTFiNDQxY2U1OWQzMDVmNWIxMmJlYjlmOTFmNGM3NTRlMB/K07c=: 00:22:59.059 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:59.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:59.059 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:22:59.059 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.059 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.059 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.059 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:59.059 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:59.059 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:59.059 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:59.318 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:22:59.318 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:59.318 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha512 00:22:59.318 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:59.318 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:59.319 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:59.319 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:59.319 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.319 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.319 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.319 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:59.319 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:59.889 00:22:59.889 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:59.889 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.889 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:00.158 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.158 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:00.158 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.158 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.158 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.158 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:00.158 { 00:23:00.158 "auth": { 00:23:00.158 "dhgroup": "ffdhe6144", 00:23:00.158 "digest": "sha512", 00:23:00.158 "state": "completed" 00:23:00.158 }, 00:23:00.158 "cntlid": 129, 00:23:00.158 "listen_address": { 00:23:00.158 "adrfam": "IPv4", 00:23:00.158 "traddr": "10.0.0.2", 00:23:00.158 "trsvcid": "4420", 00:23:00.158 "trtype": "TCP" 00:23:00.158 }, 00:23:00.158 "peer_address": { 00:23:00.158 "adrfam": "IPv4", 00:23:00.158 "traddr": "10.0.0.1", 00:23:00.158 "trsvcid": "56588", 00:23:00.158 "trtype": "TCP" 00:23:00.158 }, 00:23:00.158 "qid": 0, 00:23:00.158 "state": "enabled", 00:23:00.158 "thread": "nvmf_tgt_poll_group_000" 00:23:00.158 } 00:23:00.158 ]' 00:23:00.158 18:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:00.158 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:00.158 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:00.158 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:00.158 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:00.158 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:00.158 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:00.158 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:00.416 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:00:OTA0ZWZlYWE4NmVkZTI2N2EzZTU3NGNkOWMxZWJjNDA4ZWFjNmFlMTAwZDRhMzYy01x3lw==: --dhchap-ctrl-secret DHHC-1:03:OTBjMGE0YmU2NDdjZjYzMDU0NGU0MzE4Y2FjMTQwMWE2NTgyY2RlMGRlM2EwMWExMzQ0N2Y2N2M4N2YwZGNmNWRVl8M=: 00:23:01.350 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:01.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:01.350 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:23:01.350 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.350 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.350 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.350 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:01.350 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:01.350 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:01.608 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:23:01.608 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:01.608 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:01.608 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:01.608 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:01.608 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:23:01.608 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:01.608 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.608 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.608 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.608 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:01.608 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:01.866 00:23:01.866 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:01.866 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.866 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:02.433 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.433 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:02.433 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.433 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.433 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.433 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:02.433 { 00:23:02.433 "auth": { 00:23:02.433 "dhgroup": "ffdhe6144", 00:23:02.433 "digest": "sha512", 00:23:02.433 "state": "completed" 00:23:02.433 }, 00:23:02.433 "cntlid": 131, 00:23:02.433 "listen_address": { 00:23:02.433 "adrfam": "IPv4", 00:23:02.433 "traddr": "10.0.0.2", 00:23:02.433 "trsvcid": "4420", 00:23:02.433 "trtype": "TCP" 00:23:02.433 }, 00:23:02.433 "peer_address": { 00:23:02.433 "adrfam": "IPv4", 00:23:02.433 "traddr": "10.0.0.1", 00:23:02.433 "trsvcid": "57718", 00:23:02.433 "trtype": "TCP" 00:23:02.433 }, 00:23:02.433 "qid": 0, 00:23:02.433 "state": "enabled", 00:23:02.433 "thread": "nvmf_tgt_poll_group_000" 00:23:02.433 } 00:23:02.433 ]' 00:23:02.433 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:02.433 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:02.433 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:02.433 18:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:02.433 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:02.433 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.433 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.433 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.691 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:01:ZGE0NTk0NWEyNjIzOWU1ODE5NDA3MDQ0MDY0NGE4ZWZdneta: --dhchap-ctrl-secret DHHC-1:02:ZDNmNzVmNmE2ZjBkZWJmMmJiOWYwNWNlODUxMDVmMGFlMDc1NmQwOWRlM2M0ZjE5utXpNg==: 00:23:03.624 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.624 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:23:03.624 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.624 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.624 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.624 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:03.624 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:03.624 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:03.624 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:23:03.624 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:03.624 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:03.624 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:03.624 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:03.624 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:03.624 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:03.624 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
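The kernel-initiator leg of each cycle, visible above as the long nvme connect lines, follows the pattern sketched below. The DHHC-1 secrets are shortened to placeholders here; the full strings appear verbatim in the log.

# Connect the kernel NVMe/TCP initiator with the same DH-HMAC-CHAP key pair as the target expects.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da \
  --hostid 0b8484e2-e129-4a11-8748-0b3c728771da \
  --dhchap-secret 'DHHC-1:01:<host key>' --dhchap-ctrl-secret 'DHHC-1:02:<controller key>'
# The script then drops the connection and removes the host before moving to the next key.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # log reports: disconnected 1 controller(s)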
00:23:03.624 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.882 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.882 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:03.882 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.140 00:23:04.140 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:04.140 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:04.140 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.399 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.399 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:04.399 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.399 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.657 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.657 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:04.657 { 00:23:04.657 "auth": { 00:23:04.657 "dhgroup": "ffdhe6144", 00:23:04.657 "digest": "sha512", 00:23:04.657 "state": "completed" 00:23:04.657 }, 00:23:04.657 "cntlid": 133, 00:23:04.657 "listen_address": { 00:23:04.657 "adrfam": "IPv4", 00:23:04.657 "traddr": "10.0.0.2", 00:23:04.657 "trsvcid": "4420", 00:23:04.657 "trtype": "TCP" 00:23:04.657 }, 00:23:04.657 "peer_address": { 00:23:04.657 "adrfam": "IPv4", 00:23:04.657 "traddr": "10.0.0.1", 00:23:04.657 "trsvcid": "57750", 00:23:04.657 "trtype": "TCP" 00:23:04.657 }, 00:23:04.657 "qid": 0, 00:23:04.657 "state": "enabled", 00:23:04.657 "thread": "nvmf_tgt_poll_group_000" 00:23:04.657 } 00:23:04.657 ]' 00:23:04.657 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:04.657 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:04.657 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:04.657 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:04.657 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:04.657 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:04.657 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:04.657 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.916 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:02:MGYzNzg3ZjM5Zjc5Njk4M2I4MGE0YWExOGYxYTVkYmUxNDY4NGRjZWY0NmZkNDdlNqeI4Q==: --dhchap-ctrl-secret DHHC-1:01:NDg0ZjRmMjdlODliODE5MTM0MjliNGZiMzgwZDZkZDfpWGLp: 00:23:05.481 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.481 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:23:05.481 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.481 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.743 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.743 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:05.743 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:05.743 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:06.001 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:23:06.001 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:06.001 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:06.001 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:06.001 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:06.001 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:06.001 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key3 00:23:06.001 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.001 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.001 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.001 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:06.001 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:06.260 00:23:06.260 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:06.260 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.260 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:06.519 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.519 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:06.519 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.519 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.519 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.519 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:06.519 { 00:23:06.519 "auth": { 00:23:06.519 "dhgroup": "ffdhe6144", 00:23:06.519 "digest": "sha512", 00:23:06.519 "state": "completed" 00:23:06.519 }, 00:23:06.519 "cntlid": 135, 00:23:06.519 "listen_address": { 00:23:06.519 "adrfam": "IPv4", 00:23:06.519 "traddr": "10.0.0.2", 00:23:06.519 "trsvcid": "4420", 00:23:06.519 "trtype": "TCP" 00:23:06.519 }, 00:23:06.519 "peer_address": { 00:23:06.519 "adrfam": "IPv4", 00:23:06.519 "traddr": "10.0.0.1", 00:23:06.519 "trsvcid": "57774", 00:23:06.519 "trtype": "TCP" 00:23:06.519 }, 00:23:06.519 "qid": 0, 00:23:06.519 "state": "enabled", 00:23:06.519 "thread": "nvmf_tgt_poll_group_000" 00:23:06.519 } 00:23:06.519 ]' 00:23:06.519 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:06.777 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:06.777 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:06.777 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:06.777 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:06.777 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:06.777 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:06.777 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:07.035 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:03:MTFhOTg2OTBjNTdlY2QyZjZjYzcwZDQ1YWJhZDM4NTFiNDQxY2U1OWQzMDVmNWIxMmJlYjlmOTFmNGM3NTRlMB/K07c=: 00:23:07.602 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:07.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:07.602 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:23:07.602 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.602 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.602 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.602 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:07.602 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:07.602 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:07.602 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:07.860 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:23:07.860 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:07.860 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:07.860 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:07.860 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:07.860 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:07.860 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.860 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.860 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.118 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.118 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.118 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.685 00:23:08.685 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:08.685 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:08.685 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:08.943 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.943 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:08.943 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.943 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.943 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.943 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:08.943 { 00:23:08.943 "auth": { 00:23:08.943 "dhgroup": "ffdhe8192", 00:23:08.943 "digest": "sha512", 00:23:08.943 "state": "completed" 00:23:08.943 }, 00:23:08.943 "cntlid": 137, 00:23:08.943 "listen_address": { 00:23:08.943 "adrfam": "IPv4", 00:23:08.943 "traddr": "10.0.0.2", 00:23:08.943 "trsvcid": "4420", 00:23:08.943 "trtype": "TCP" 00:23:08.943 }, 00:23:08.943 "peer_address": { 00:23:08.943 "adrfam": "IPv4", 00:23:08.943 "traddr": "10.0.0.1", 00:23:08.943 "trsvcid": "57804", 00:23:08.943 "trtype": "TCP" 00:23:08.943 }, 00:23:08.943 "qid": 0, 00:23:08.943 "state": "enabled", 00:23:08.943 "thread": "nvmf_tgt_poll_group_000" 00:23:08.943 } 00:23:08.943 ]' 00:23:08.943 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:08.943 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:08.943 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:09.201 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:09.201 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:09.201 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:09.201 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:09.201 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:09.459 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:00:OTA0ZWZlYWE4NmVkZTI2N2EzZTU3NGNkOWMxZWJjNDA4ZWFjNmFlMTAwZDRhMzYy01x3lw==: --dhchap-ctrl-secret DHHC-1:03:OTBjMGE0YmU2NDdjZjYzMDU0NGU0MzE4Y2FjMTQwMWE2NTgyY2RlMGRlM2EwMWExMzQ0N2Y2N2M4N2YwZGNmNWRVl8M=: 00:23:10.394 18:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:10.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:10.394 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:23:10.394 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.394 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.394 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.394 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:10.394 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:10.394 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:10.394 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:23:10.394 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:10.394 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:10.394 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:10.394 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:10.394 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:10.394 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.394 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.394 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.652 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.652 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.652 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.218 00:23:11.218 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:11.218 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
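The assertions interleaved with the JSON dumps above amount to a handful of jq checks against the two RPC sockets. A sketch, assuming ffdhe8192 is the group under test at this point and the target uses its default RPC socket:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SUBNQN="nqn.2024-03.io.spdk:cnode0"
# The attached controller must be reported by the host-side bdev_nvme layer as nvme0 ...
[[ $($RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
# ... and the target-side qpair must have completed authentication with the expected parameters.
qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]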
00:23:11.218 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:11.476 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.476 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:11.476 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.476 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.476 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.476 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:11.476 { 00:23:11.476 "auth": { 00:23:11.476 "dhgroup": "ffdhe8192", 00:23:11.476 "digest": "sha512", 00:23:11.476 "state": "completed" 00:23:11.476 }, 00:23:11.476 "cntlid": 139, 00:23:11.476 "listen_address": { 00:23:11.476 "adrfam": "IPv4", 00:23:11.476 "traddr": "10.0.0.2", 00:23:11.476 "trsvcid": "4420", 00:23:11.476 "trtype": "TCP" 00:23:11.476 }, 00:23:11.476 "peer_address": { 00:23:11.476 "adrfam": "IPv4", 00:23:11.476 "traddr": "10.0.0.1", 00:23:11.476 "trsvcid": "57822", 00:23:11.476 "trtype": "TCP" 00:23:11.476 }, 00:23:11.476 "qid": 0, 00:23:11.476 "state": "enabled", 00:23:11.476 "thread": "nvmf_tgt_poll_group_000" 00:23:11.476 } 00:23:11.476 ]' 00:23:11.476 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:11.476 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:11.476 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:11.476 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:11.476 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:11.476 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:11.476 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:11.476 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:12.041 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:01:ZGE0NTk0NWEyNjIzOWU1ODE5NDA3MDQ0MDY0NGE4ZWZdneta: --dhchap-ctrl-secret DHHC-1:02:ZDNmNzVmNmE2ZjBkZWJmMmJiOWYwNWNlODUxMDVmMGFlMDc1NmQwOWRlM2M0ZjE5utXpNg==: 00:23:12.624 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:12.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:12.624 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:23:12.624 18:30:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.624 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.624 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.624 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:12.624 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:12.624 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:12.882 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:23:12.882 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:12.882 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:12.882 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:12.882 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:12.882 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:12.882 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.882 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.882 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.882 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.882 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.882 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.817 00:23:13.817 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:13.817 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.817 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:14.075 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.075 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:23:14.075 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.075 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.075 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.075 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:14.075 { 00:23:14.075 "auth": { 00:23:14.075 "dhgroup": "ffdhe8192", 00:23:14.075 "digest": "sha512", 00:23:14.075 "state": "completed" 00:23:14.075 }, 00:23:14.075 "cntlid": 141, 00:23:14.075 "listen_address": { 00:23:14.075 "adrfam": "IPv4", 00:23:14.075 "traddr": "10.0.0.2", 00:23:14.075 "trsvcid": "4420", 00:23:14.075 "trtype": "TCP" 00:23:14.075 }, 00:23:14.075 "peer_address": { 00:23:14.075 "adrfam": "IPv4", 00:23:14.075 "traddr": "10.0.0.1", 00:23:14.075 "trsvcid": "36914", 00:23:14.075 "trtype": "TCP" 00:23:14.075 }, 00:23:14.075 "qid": 0, 00:23:14.075 "state": "enabled", 00:23:14.075 "thread": "nvmf_tgt_poll_group_000" 00:23:14.075 } 00:23:14.075 ]' 00:23:14.075 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:14.075 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:14.075 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:14.075 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:14.075 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:14.075 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:14.075 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:14.075 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:14.642 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:02:MGYzNzg3ZjM5Zjc5Njk4M2I4MGE0YWExOGYxYTVkYmUxNDY4NGRjZWY0NmZkNDdlNqeI4Q==: --dhchap-ctrl-secret DHHC-1:01:NDg0ZjRmMjdlODliODE5MTM0MjliNGZiMzgwZDZkZDfpWGLp: 00:23:15.209 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:15.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:15.209 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:23:15.209 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.209 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.209 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.209 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:23:15.209 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:15.209 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:15.467 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:23:15.467 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:15.467 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:15.467 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:15.467 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:15.467 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:15.467 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key3 00:23:15.467 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.467 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.468 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.468 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:15.468 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:16.037 00:23:16.037 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:16.037 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:16.037 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:16.299 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.557 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:16.557 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.557 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.557 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.557 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:16.557 { 00:23:16.557 "auth": { 
00:23:16.557 "dhgroup": "ffdhe8192", 00:23:16.557 "digest": "sha512", 00:23:16.557 "state": "completed" 00:23:16.557 }, 00:23:16.557 "cntlid": 143, 00:23:16.557 "listen_address": { 00:23:16.557 "adrfam": "IPv4", 00:23:16.557 "traddr": "10.0.0.2", 00:23:16.557 "trsvcid": "4420", 00:23:16.557 "trtype": "TCP" 00:23:16.557 }, 00:23:16.557 "peer_address": { 00:23:16.557 "adrfam": "IPv4", 00:23:16.557 "traddr": "10.0.0.1", 00:23:16.557 "trsvcid": "36942", 00:23:16.557 "trtype": "TCP" 00:23:16.557 }, 00:23:16.557 "qid": 0, 00:23:16.557 "state": "enabled", 00:23:16.557 "thread": "nvmf_tgt_poll_group_000" 00:23:16.557 } 00:23:16.557 ]' 00:23:16.557 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:16.557 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:16.557 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:16.557 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:16.557 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:16.557 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:16.557 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:16.557 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:16.815 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:03:MTFhOTg2OTBjNTdlY2QyZjZjYzcwZDQ1YWJhZDM4NTFiNDQxY2U1OWQzMDVmNWIxMmJlYjlmOTFmNGM3NTRlMB/K07c=: 00:23:17.747 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:17.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:17.747 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:23:17.747 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.747 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.747 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.747 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:23:17.747 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:23:17.747 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:23:17.747 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:17.747 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:17.747 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:18.005 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:23:18.005 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:18.005 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:18.005 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:18.005 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:18.005 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:18.005 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:18.005 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.005 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.005 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.005 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:18.005 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:18.571 00:23:18.571 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:18.571 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:18.571 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:18.829 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.829 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:18.829 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.829 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.829 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.829 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:18.829 { 00:23:18.829 "auth": 
{ 00:23:18.829 "dhgroup": "ffdhe8192", 00:23:18.829 "digest": "sha512", 00:23:18.829 "state": "completed" 00:23:18.829 }, 00:23:18.829 "cntlid": 145, 00:23:18.829 "listen_address": { 00:23:18.829 "adrfam": "IPv4", 00:23:18.829 "traddr": "10.0.0.2", 00:23:18.829 "trsvcid": "4420", 00:23:18.829 "trtype": "TCP" 00:23:18.829 }, 00:23:18.829 "peer_address": { 00:23:18.829 "adrfam": "IPv4", 00:23:18.829 "traddr": "10.0.0.1", 00:23:18.829 "trsvcid": "36952", 00:23:18.829 "trtype": "TCP" 00:23:18.829 }, 00:23:18.829 "qid": 0, 00:23:18.829 "state": "enabled", 00:23:18.829 "thread": "nvmf_tgt_poll_group_000" 00:23:18.829 } 00:23:18.829 ]' 00:23:18.829 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:18.829 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:18.829 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:19.086 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:19.086 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:19.086 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:19.086 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:19.086 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:19.344 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:00:OTA0ZWZlYWE4NmVkZTI2N2EzZTU3NGNkOWMxZWJjNDA4ZWFjNmFlMTAwZDRhMzYy01x3lw==: --dhchap-ctrl-secret DHHC-1:03:OTBjMGE0YmU2NDdjZjYzMDU0NGU0MzE4Y2FjMTQwMWE2NTgyY2RlMGRlM2EwMWExMzQ0N2Y2N2M4N2YwZGNmNWRVl8M=: 00:23:20.277 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:20.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:20.277 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:23:20.277 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.277 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.277 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.277 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key1 00:23:20.277 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.277 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.277 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:23:20.277 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:20.277 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:20.277 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:20.278 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:20.278 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.278 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:20.278 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.278 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:20.278 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:20.536 2024/07/22 18:30:32 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:20.536 request: 00:23:20.536 { 00:23:20.536 "method": "bdev_nvme_attach_controller", 00:23:20.536 "params": { 00:23:20.536 "name": "nvme0", 00:23:20.536 "trtype": "tcp", 00:23:20.536 "traddr": "10.0.0.2", 00:23:20.536 "adrfam": "ipv4", 00:23:20.536 "trsvcid": "4420", 00:23:20.536 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:20.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da", 00:23:20.536 "prchk_reftag": false, 00:23:20.536 "prchk_guard": false, 00:23:20.536 "hdgst": false, 00:23:20.536 "ddgst": false, 00:23:20.536 "dhchap_key": "key2" 00:23:20.536 } 00:23:20.536 } 00:23:20.536 Got JSON-RPC error response 00:23:20.536 GoRPCClient: error on JSON-RPC call 00:23:20.536 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:20.536 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:20.536 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:20.536 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es 
== 0 )) 00:23:20.536 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:23:20.536 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.536 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.536 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.536 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.536 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.536 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.536 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.536 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:20.536 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:20.846 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:20.846 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:20.846 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.846 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:20.846 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.846 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:20.846 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:21.103 2024/07/22 18:30:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) 
subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:21.103 request: 00:23:21.103 { 00:23:21.103 "method": "bdev_nvme_attach_controller", 00:23:21.103 "params": { 00:23:21.103 "name": "nvme0", 00:23:21.103 "trtype": "tcp", 00:23:21.103 "traddr": "10.0.0.2", 00:23:21.103 "adrfam": "ipv4", 00:23:21.103 "trsvcid": "4420", 00:23:21.103 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:21.103 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da", 00:23:21.103 "prchk_reftag": false, 00:23:21.103 "prchk_guard": false, 00:23:21.103 "hdgst": false, 00:23:21.103 "ddgst": false, 00:23:21.103 "dhchap_key": "key1", 00:23:21.104 "dhchap_ctrlr_key": "ckey2" 00:23:21.104 } 00:23:21.104 } 00:23:21.104 Got JSON-RPC error response 00:23:21.104 GoRPCClient: error on JSON-RPC call 00:23:21.362 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:21.362 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:21.362 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:21.362 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:21.362 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:23:21.362 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.362 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.362 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.362 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key1 00:23:21.362 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.362 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.362 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.362 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:21.362 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:21.362 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:21.362 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:21.362 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:23:21.362 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:21.362 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:21.362 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:21.362 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:21.928 2024/07/22 18:30:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:21.928 request: 00:23:21.928 { 00:23:21.928 "method": "bdev_nvme_attach_controller", 00:23:21.928 "params": { 00:23:21.928 "name": "nvme0", 00:23:21.928 "trtype": "tcp", 00:23:21.928 "traddr": "10.0.0.2", 00:23:21.928 "adrfam": "ipv4", 00:23:21.928 "trsvcid": "4420", 00:23:21.928 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:21.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da", 00:23:21.928 "prchk_reftag": false, 00:23:21.928 "prchk_guard": false, 00:23:21.928 "hdgst": false, 00:23:21.928 "ddgst": false, 00:23:21.928 "dhchap_key": "key1", 00:23:21.928 "dhchap_ctrlr_key": "ckey1" 00:23:21.928 } 00:23:21.928 } 00:23:21.928 Got JSON-RPC error response 00:23:21.928 GoRPCClient: error on JSON-RPC call 00:23:21.928 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:21.928 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:21.928 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:21.928 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:21.928 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:23:21.928 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.928 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.928 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.928 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 85479 00:23:21.928 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 85479 ']' 00:23:21.928 18:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 85479 00:23:21.928 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:23:21.928 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:21.928 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85479 00:23:21.929 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:21.929 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:21.929 killing process with pid 85479 00:23:21.929 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85479' 00:23:21.929 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 85479 00:23:21.929 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 85479 00:23:23.305 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:23.305 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:23.305 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:23.305 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.305 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=90362 00:23:23.305 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 90362 00:23:23.305 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:23.305 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 90362 ']' 00:23:23.305 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.305 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:23.305 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
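Here the first target (pid 85479) is shut down and a fresh nvmf_tgt is launched with the nvmf_auth log component enabled and --wait-for-rpc, so the remaining cases run against a target whose authentication exchange is traced. The start-and-wait pattern, condensed from the log (killprocess and waitforlisten are autotest helpers; the polling loop below is only a rough stand-in for waitforlisten):

# stop the previous target, then relaunch it with auth tracing enabled
kill "$old_pid"; wait "$old_pid" 2>/dev/null

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!

# poll the RPC socket until the new process answers (approximation of waitforlisten)
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done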
00:23:23.305 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:23.305 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.269 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.269 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:23:24.269 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:24.269 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:24.269 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.269 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.269 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:24.270 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 90362 00:23:24.270 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 90362 ']' 00:23:24.270 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.270 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:24.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.270 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
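Because the new target was started with --wait-for-rpc, its framework stays idle until an initialization batch is sent over /var/tmp/spdk.sock; the rpc_cmd at target/auth.sh@143 just below (its here-doc body is suppressed by the xtrace toggling) has to bring the transport, subsystem, listener and DH-HMAC-CHAP keys back up before connect_authenticate can run again. The exact batch is not visible in this excerpt; a minimal re-initialization under these flags would look roughly like the following, with the listener address taken from earlier log lines and the key path purely illustrative:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# with --wait-for-rpc the application waits here until framework_start_init arrives
$rpc framework_start_init

# re-create the TCP transport, the subsystem under test and its listener
$rpc nvmf_create_transport -t tcp
$rpc nvmf_create_subsystem nqn.2024-03.io.spdk:cnode0
$rpc nvmf_subsystem_add_listener nqn.2024-03.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# assumed: the DH-HMAC-CHAP keys also have to be re-registered with the new process
$rpc keyring_file_add_key key3 /path/to/key3.txt   # path is illustrative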
00:23:24.270 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:24.270 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.527 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.527 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:23:24.527 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:23:24.527 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.527 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.094 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.094 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:23:25.094 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:25.094 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:25.094 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:25.094 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:25.094 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:25.094 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key3 00:23:25.094 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.094 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.094 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.094 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:25.094 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:25.662 00:23:25.662 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:25.662 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:25.662 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:25.920 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.920 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
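For this pass the controller key slot is empty (ckeys[3] is unset), so nvmf_subsystem_add_host above was called with --dhchap-key key3 only, i.e. unidirectional authentication: the target verifies the host, but no controller secret is involved. The nvmf_subsystem_get_qpairs call just issued returns an auth object per qpair, and the checks that follow assert that the negotiated digest and DH group match the configuration and that the exchange reached the completed state. Condensed, with rpc_cmd spelled out as rpc.py:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 > qpairs.json
jq -r '.[0].auth.digest'  qpairs.json   # expected: sha512
jq -r '.[0].auth.dhgroup' qpairs.json   # expected: ffdhe8192
jq -r '.[0].auth.state'   qpairs.json   # expected: completed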
00:23:25.920 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.920 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.920 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.920 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:25.920 { 00:23:25.920 "auth": { 00:23:25.920 "dhgroup": "ffdhe8192", 00:23:25.920 "digest": "sha512", 00:23:25.920 "state": "completed" 00:23:25.920 }, 00:23:25.920 "cntlid": 1, 00:23:25.920 "listen_address": { 00:23:25.920 "adrfam": "IPv4", 00:23:25.920 "traddr": "10.0.0.2", 00:23:25.920 "trsvcid": "4420", 00:23:25.920 "trtype": "TCP" 00:23:25.920 }, 00:23:25.920 "peer_address": { 00:23:25.920 "adrfam": "IPv4", 00:23:25.920 "traddr": "10.0.0.1", 00:23:25.920 "trsvcid": "36680", 00:23:25.920 "trtype": "TCP" 00:23:25.920 }, 00:23:25.920 "qid": 0, 00:23:25.920 "state": "enabled", 00:23:25.920 "thread": "nvmf_tgt_poll_group_000" 00:23:25.920 } 00:23:25.920 ]' 00:23:25.920 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:25.920 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:25.920 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:26.179 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:26.179 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:26.179 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:26.179 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:26.179 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:26.437 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid 0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-secret DHHC-1:03:MTFhOTg2OTBjNTdlY2QyZjZjYzcwZDQ1YWJhZDM4NTFiNDQxY2U1OWQzMDVmNWIxMmJlYjlmOTFmNGM3NTRlMB/K07c=: 00:23:27.003 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:27.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:27.003 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:23:27.004 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.004 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.262 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.262 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --dhchap-key key3 00:23:27.262 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.262 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.262 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.262 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:27.262 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:27.521 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:27.521 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:27.521 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:27.521 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:27.521 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:27.521 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:27.521 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:27.521 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:27.521 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:27.780 2024/07/22 18:30:39 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:27.780 request: 00:23:27.780 { 00:23:27.780 "method": "bdev_nvme_attach_controller", 00:23:27.780 "params": { 00:23:27.780 "name": "nvme0", 00:23:27.780 "trtype": "tcp", 00:23:27.780 "traddr": "10.0.0.2", 00:23:27.780 "adrfam": "ipv4", 00:23:27.780 "trsvcid": "4420", 00:23:27.780 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:27.780 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da", 00:23:27.780 "prchk_reftag": false, 00:23:27.780 "prchk_guard": false, 00:23:27.780 "hdgst": false, 00:23:27.780 "ddgst": false, 00:23:27.780 "dhchap_key": "key3" 00:23:27.780 } 00:23:27.780 } 00:23:27.780 Got JSON-RPC error response 00:23:27.780 GoRPCClient: error on JSON-RPC call 00:23:27.780 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:27.780 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:27.780 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:27.780 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:27.780 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:23:27.780 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:23:27.780 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:27.780 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:28.038 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:28.038 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:28.038 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:28.038 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:28.038 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:28.038 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:28.038 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:28.038 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:28.038 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:28.295 2024/07/22 18:30:40 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:28.295 request: 00:23:28.295 { 00:23:28.295 "method": "bdev_nvme_attach_controller", 00:23:28.295 "params": { 00:23:28.295 "name": "nvme0", 00:23:28.295 "trtype": "tcp", 00:23:28.295 "traddr": "10.0.0.2", 00:23:28.295 "adrfam": "ipv4", 00:23:28.295 "trsvcid": "4420", 00:23:28.295 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:28.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da", 00:23:28.295 "prchk_reftag": false, 00:23:28.295 "prchk_guard": false, 00:23:28.295 "hdgst": false, 00:23:28.295 "ddgst": false, 00:23:28.295 "dhchap_key": "key3" 00:23:28.295 } 00:23:28.295 } 00:23:28.295 Got JSON-RPC error response 00:23:28.295 GoRPCClient: error on JSON-RPC call 00:23:28.295 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:28.295 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:28.295 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:28.295 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:28.295 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:23:28.295 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:23:28.295 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:23:28.295 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:28.295 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:28.295 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:28.565 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:23:28.565 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.565 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.565 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.565 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:23:28.565 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.565 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.565 18:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.565 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:28.565 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:28.565 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:28.565 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:28.565 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:28.565 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:28.565 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:28.565 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:28.565 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:28.822 2024/07/22 18:30:40 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:28.822 request: 00:23:28.822 { 00:23:28.822 "method": "bdev_nvme_attach_controller", 00:23:28.822 "params": { 00:23:28.822 "name": "nvme0", 00:23:28.822 "trtype": "tcp", 00:23:28.822 "traddr": "10.0.0.2", 00:23:28.822 "adrfam": "ipv4", 00:23:28.822 "trsvcid": "4420", 00:23:28.822 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:28.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da", 00:23:28.822 "prchk_reftag": false, 00:23:28.822 "prchk_guard": false, 00:23:28.822 "hdgst": false, 00:23:28.822 "ddgst": false, 00:23:28.822 "dhchap_key": "key0", 00:23:28.822 "dhchap_ctrlr_key": "key1" 00:23:28.822 } 00:23:28.822 } 00:23:28.822 Got JSON-RPC error response 00:23:28.822 GoRPCClient: error on JSON-RPC call 00:23:28.822 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:28.822 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 
-- # (( es > 128 )) 00:23:28.822 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:28.822 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:28.822 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:28.822 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:29.081 00:23:29.081 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:23:29.081 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:29.081 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:23:29.339 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.339 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:29.339 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:29.598 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:23:29.598 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:23:29.598 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 85523 00:23:29.598 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 85523 ']' 00:23:29.598 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 85523 00:23:29.598 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:23:29.598 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:29.598 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85523 00:23:29.598 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:29.598 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:29.598 killing process with pid 85523 00:23:29.598 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85523' 00:23:29.598 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 85523 00:23:29.598 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 85523 00:23:32.126 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:32.126 18:30:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:32.126 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:23:32.126 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:32.126 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:23:32.126 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:32.126 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:32.126 rmmod nvme_tcp 00:23:32.126 rmmod nvme_fabrics 00:23:32.126 rmmod nvme_keyring 00:23:32.126 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:32.126 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:23:32.126 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:23:32.126 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 90362 ']' 00:23:32.126 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 90362 00:23:32.126 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 90362 ']' 00:23:32.126 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 90362 00:23:32.126 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:23:32.126 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:32.126 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90362 00:23:32.126 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:32.126 killing process with pid 90362 00:23:32.126 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:32.126 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90362' 00:23:32.126 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 90362 00:23:32.126 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 90362 00:23:33.500 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:33.500 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:33.500 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:33.500 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:33.500 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:33.500 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.500 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.500 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.500 18:30:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:33.500 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.AxW /tmp/spdk.key-sha256.sBj /tmp/spdk.key-sha384.ZBq /tmp/spdk.key-sha512.fNv /tmp/spdk.key-sha512.1lW /tmp/spdk.key-sha384.CU3 /tmp/spdk.key-sha256.xu6 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:23:33.500 00:23:33.500 real 3m1.261s 00:23:33.500 user 7m16.548s 00:23:33.500 sys 0m23.056s 00:23:33.500 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:33.500 ************************************ 00:23:33.500 END TEST nvmf_auth_target 00:23:33.500 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.500 ************************************ 00:23:33.758 18:30:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:23:33.758 18:30:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:23:33.758 18:30:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:33.758 18:30:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:23:33.758 18:30:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:33.758 18:30:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:33.758 ************************************ 00:23:33.758 START TEST nvmf_bdevio_no_huge 00:23:33.758 ************************************ 00:23:33.758 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:33.758 * Looking for test storage... 
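For reference, the host-side DH-HMAC-CHAP flow exercised in the auth test above reduces to a handful of rpc.py calls against the host application's socket. A minimal sketch, assuming the target subsystem is already configured with the matching keys and using only RPCs and values that appear in the trace (the socket path and host NQN are specific to this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/host.sock
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da

    # Allow the full digest/dhgroup set on the host side before attaching.
    "$rpc" -s "$sock" bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

    # Attach with the key the target accepts for this host, confirm the controller came up, detach.
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
    "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    "$rpc" -s "$sock" bdev_nvme_detach_controller nvme0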
00:23:33.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:33.758 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:33.758 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:33.758 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.758 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.758 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:33.759 Cannot find device "nvmf_tgt_br" 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:33.759 Cannot find device "nvmf_tgt_br2" 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:33.759 Cannot find device "nvmf_tgt_br" 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:33.759 Cannot find device "nvmf_tgt_br2" 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:33.759 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:34.017 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:34.017 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:34.017 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:23:34.017 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:34.017 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:34.017 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:23:34.017 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:34.017 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:34.017 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:34.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:34.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:23:34.018 00:23:34.018 --- 10.0.0.2 ping statistics --- 00:23:34.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.018 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:34.018 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:34.018 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:23:34.018 00:23:34.018 --- 10.0.0.3 ping statistics --- 00:23:34.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.018 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:34.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:34.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:23:34.018 00:23:34.018 --- 10.0.0.1 ping statistics --- 00:23:34.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.018 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:34.018 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:34.018 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:34.018 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:34.018 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:34.018 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:34.018 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=90809 00:23:34.018 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:34.018 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 90809 00:23:34.018 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 90809 ']' 00:23:34.018 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.018 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:34.018 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.018 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:34.018 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:34.276 [2024-07-22 18:30:46.157438] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
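The nvmf_veth_init steps above amount to a small veth-and-bridge topology with the target side isolated in its own network namespace; the ping checks confirm 10.0.0.1, 10.0.0.2 and 10.0.0.3 are reachable before nvmf_tgt is launched in that namespace with --no-huge -s 1024. A condensed sketch of the same setup, with names and addresses taken from the trace (the second target interface, nvmf_tgt_if2 on 10.0.0.3, follows the same pattern and is omitted):

    # Run as root. Target-side interfaces live in the nvmf_tgt_ns_spdk namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side veth peers and open NVMe/TCP port 4420 toward the initiator interface.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target -> initiator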
00:23:34.276 [2024-07-22 18:30:46.157690] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:34.533 [2024-07-22 18:30:46.380772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:34.792 [2024-07-22 18:30:46.644085] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.792 [2024-07-22 18:30:46.644148] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.792 [2024-07-22 18:30:46.644168] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.792 [2024-07-22 18:30:46.644184] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.792 [2024-07-22 18:30:46.644202] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:34.792 [2024-07-22 18:30:46.644464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:34.792 [2024-07-22 18:30:46.646703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:23:34.792 [2024-07-22 18:30:46.646886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:34.792 [2024-07-22 18:30:46.646902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:35.359 [2024-07-22 18:30:47.174616] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:35.359 Malloc0 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:35.359 [2024-07-22 18:30:47.271029] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:35.359 { 00:23:35.359 "params": { 00:23:35.359 "name": "Nvme$subsystem", 00:23:35.359 "trtype": "$TEST_TRANSPORT", 00:23:35.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.359 "adrfam": "ipv4", 00:23:35.359 "trsvcid": "$NVMF_PORT", 00:23:35.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.359 "hdgst": ${hdgst:-false}, 00:23:35.359 "ddgst": ${ddgst:-false} 00:23:35.359 }, 00:23:35.359 "method": "bdev_nvme_attach_controller" 00:23:35.359 } 00:23:35.359 EOF 00:23:35.359 )") 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:23:35.359 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:35.359 "params": { 00:23:35.359 "name": "Nvme1", 00:23:35.359 "trtype": "tcp", 00:23:35.359 "traddr": "10.0.0.2", 00:23:35.359 "adrfam": "ipv4", 00:23:35.359 "trsvcid": "4420", 00:23:35.359 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.359 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:35.359 "hdgst": false, 00:23:35.359 "ddgst": false 00:23:35.359 }, 00:23:35.359 "method": "bdev_nvme_attach_controller" 00:23:35.359 }' 00:23:35.618 [2024-07-22 18:30:47.400335] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:23:35.618 [2024-07-22 18:30:47.400557] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid90863 ] 00:23:35.618 [2024-07-22 18:30:47.612681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:35.875 [2024-07-22 18:30:47.875298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.875 [2024-07-22 18:30:47.875385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.875 [2024-07-22 18:30:47.875371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.442 I/O targets: 00:23:36.442 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:36.442 00:23:36.442 00:23:36.442 CUnit - A unit testing framework for C - Version 2.1-3 00:23:36.442 http://cunit.sourceforge.net/ 00:23:36.442 00:23:36.442 00:23:36.442 Suite: bdevio tests on: Nvme1n1 00:23:36.442 Test: blockdev write read block ...passed 00:23:36.442 Test: blockdev write zeroes read block ...passed 00:23:36.442 Test: blockdev write zeroes read no split ...passed 00:23:36.442 Test: blockdev write zeroes read split ...passed 00:23:36.442 Test: blockdev write zeroes read split partial ...passed 00:23:36.442 Test: blockdev reset ...[2024-07-22 18:30:48.435791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.442 [2024-07-22 18:30:48.436023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000029c00 (9): Bad file descriptor 00:23:36.442 [2024-07-22 18:30:48.448345] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
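The target that bdevio exercises is configured with five RPCs once nvmf_tgt is listening inside the namespace, and bdevio itself only needs a JSON config that attaches the exported namespace as a local bdev. The RPC calls below are copied from the trace; the shape of the config file is an assumption beyond the bdev_nvme_attach_controller entry that the trace prints:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target side (inside nvmf_tgt_ns_spdk, against the default /var/tmp/spdk.sock).
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host side: the trace feeds the config to bdevio on fd 62; a plain file works the same way.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json --no-huge -s 1024

where /tmp/bdevio_nvme.json wraps the attach entry printed above (the outer subsystems/bdev wrapper is assumed, only the params appear in the trace):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_nvme_attach_controller",
              "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                          "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
                          "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false } }
          ]
        }
      ]
    }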
00:23:36.442 passed 00:23:36.442 Test: blockdev write read 8 blocks ...passed 00:23:36.442 Test: blockdev write read size > 128k ...passed 00:23:36.442 Test: blockdev write read invalid size ...passed 00:23:36.705 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:36.705 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:36.705 Test: blockdev write read max offset ...passed 00:23:36.705 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:36.705 Test: blockdev writev readv 8 blocks ...passed 00:23:36.705 Test: blockdev writev readv 30 x 1block ...passed 00:23:36.705 Test: blockdev writev readv block ...passed 00:23:36.705 Test: blockdev writev readv size > 128k ...passed 00:23:36.705 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:36.705 Test: blockdev comparev and writev ...[2024-07-22 18:30:48.629358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:36.705 [2024-07-22 18:30:48.629445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.705 [2024-07-22 18:30:48.629477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:36.705 [2024-07-22 18:30:48.629501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:36.705 [2024-07-22 18:30:48.630235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:36.705 [2024-07-22 18:30:48.630276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:36.705 [2024-07-22 18:30:48.630305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:36.705 [2024-07-22 18:30:48.630331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:36.705 [2024-07-22 18:30:48.630831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:36.705 [2024-07-22 18:30:48.630893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:36.705 [2024-07-22 18:30:48.630920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:36.705 [2024-07-22 18:30:48.630936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:36.705 [2024-07-22 18:30:48.631364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:36.705 [2024-07-22 18:30:48.631401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:36.705 [2024-07-22 18:30:48.631428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:36.705 [2024-07-22 18:30:48.631444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:36.705 passed 00:23:36.705 Test: blockdev nvme passthru rw ...passed 00:23:36.705 Test: blockdev nvme passthru vendor specific ...[2024-07-22 18:30:48.717793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:36.705 [2024-07-22 18:30:48.717892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:36.705 [2024-07-22 18:30:48.718326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:36.705 [2024-07-22 18:30:48.718435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:36.705 [2024-07-22 18:30:48.718744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:36.705 [2024-07-22 18:30:48.718781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:36.705 [2024-07-22 18:30:48.719121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:36.705 [2024-07-22 18:30:48.719159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:36.705 passed 00:23:36.970 Test: blockdev nvme admin passthru ...passed 00:23:36.970 Test: blockdev copy ...passed 00:23:36.970 00:23:36.970 Run Summary: Type Total Ran Passed Failed Inactive 00:23:36.971 suites 1 1 n/a 0 0 00:23:36.971 tests 23 23 23 0 0 00:23:36.971 asserts 152 152 152 0 n/a 00:23:36.971 00:23:36.971 Elapsed time = 1.031 seconds 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:37.906 rmmod nvme_tcp 00:23:37.906 rmmod nvme_fabrics 00:23:37.906 rmmod nvme_keyring 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 90809 ']' 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 90809 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 90809 ']' 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 90809 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90809 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:23:37.906 killing process with pid 90809 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90809' 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 90809 00:23:37.906 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 90809 00:23:38.840 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:38.841 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:38.841 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:38.841 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:38.841 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:38.841 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.841 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:38.841 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.841 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:38.841 00:23:38.841 real 0m5.191s 00:23:38.841 user 0m19.115s 00:23:38.841 sys 0m1.728s 00:23:38.841 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:38.841 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:38.841 ************************************ 00:23:38.841 END TEST nvmf_bdevio_no_huge 00:23:38.841 ************************************ 00:23:38.841 18:30:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:23:38.841 18:30:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:38.841 
18:30:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:38.841 18:30:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:38.841 18:30:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:38.841 ************************************ 00:23:38.841 START TEST nvmf_tls 00:23:38.841 ************************************ 00:23:38.841 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:38.841 * Looking for test storage... 00:23:39.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:39.100 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:39.101 Cannot find device 
"nvmf_tgt_br" 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # true 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:39.101 Cannot find device "nvmf_tgt_br2" 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # true 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:39.101 Cannot find device "nvmf_tgt_br" 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # true 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:39.101 Cannot find device "nvmf_tgt_br2" 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # true 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:39.101 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # true 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:39.101 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:23:39.101 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:39.101 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:39.101 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:39.101 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:39.101 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:39.101 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:39.101 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:39.101 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:39.101 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:39.101 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:39.101 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:39.101 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:39.101 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
00:23:39.101 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:39.101 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:39.101 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:39.101 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:39.101 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:39.101 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:39.101 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:39.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:39.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:23:39.360 00:23:39.360 --- 10.0.0.2 ping statistics --- 00:23:39.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.360 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:39.360 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:39.360 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:23:39.360 00:23:39.360 --- 10.0.0.3 ping statistics --- 00:23:39.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.360 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:39.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:39.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:23:39.360 00:23:39.360 --- 10.0.0.1 ping statistics --- 00:23:39.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.360 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=91091 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 91091 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 91091 ']' 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:39.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:39.360 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.360 [2024-07-22 18:30:51.313367] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
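The nvmfpid/waitforlisten lines above show the target being launched inside the namespace with --wait-for-rpc, so nothing is configured until its RPC socket answers. A minimal reproduction of that launch-and-wait step (assuming an SPDK build under /home/vagrant/spdk_repo/spdk, root privileges for ip netns exec, and the default /var/tmp/spdk.sock socket; the polling loop is only a stand-in for the harness's waitforlisten helper):

    #!/usr/bin/env bash
    set -euo pipefail

    SPDK=/home/vagrant/spdk_repo/spdk
    NS=nvmf_tgt_ns_spdk
    SOCK=/var/tmp/spdk.sock

    # Launch the target inside the test namespace, paused until RPC-driven init.
    ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    tgt_pid=$!

    # Stand-in for waitforlisten: poll until the RPC socket responds.
    until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$tgt_pid"   # abort (via set -e) if the target already exited
        sleep 0.5
    done
    echo "nvmf_tgt (pid $tgt_pid) is answering RPCs on $SOCK"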
00:23:39.360 [2024-07-22 18:30:51.313609] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.618 [2024-07-22 18:30:51.500414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.877 [2024-07-22 18:30:51.807927] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.877 [2024-07-22 18:30:51.808004] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.877 [2024-07-22 18:30:51.808024] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.877 [2024-07-22 18:30:51.808041] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.877 [2024-07-22 18:30:51.808054] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.877 [2024-07-22 18:30:51.808121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.444 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:40.444 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:40.444 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:40.444 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:40.444 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.444 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.444 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:23:40.444 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:40.444 true 00:23:40.444 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:23:40.444 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:41.011 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:23:41.011 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:23:41.011 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:41.269 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:23:41.269 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:41.527 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:23:41.527 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:23:41.527 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:41.785 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:23:41.785 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:23:42.044 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:23:42.044 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:23:42.044 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:42.044 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:23:42.302 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:23:42.302 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:23:42.302 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:42.560 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:23:42.560 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:42.818 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:23:42.818 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:23:42.818 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:43.076 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:23:43.076 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.yhXg0mJ7O5 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.7KQUfF6asL 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.yhXg0mJ7O5 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.7KQUfF6asL 00:23:43.335 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:43.654 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:23:44.222 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.yhXg0mJ7O5 00:23:44.222 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.yhXg0mJ7O5 00:23:44.222 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:44.480 [2024-07-22 18:30:56.279103] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.480 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:44.739 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:44.997 [2024-07-22 18:30:56.867224] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:44.997 [2024-07-22 18:30:56.867590] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.997 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:45.256 malloc0 00:23:45.256 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:45.514 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yhXg0mJ7O5 00:23:45.779 [2024-07-22 18:30:57.657425] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: 
nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:45.779 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.yhXg0mJ7O5 00:23:57.976 Initializing NVMe Controllers 00:23:57.976 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:57.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:57.976 Initialization complete. Launching workers. 00:23:57.976 ======================================================== 00:23:57.976 Latency(us) 00:23:57.976 Device Information : IOPS MiB/s Average min max 00:23:57.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6279.27 24.53 10195.66 3971.55 15352.46 00:23:57.976 ======================================================== 00:23:57.976 Total : 6279.27 24.53 10195.66 3971.55 15352.46 00:23:57.976 00:23:57.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:57.976 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yhXg0mJ7O5 00:23:57.976 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:57.976 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:57.976 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:57.976 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.yhXg0mJ7O5' 00:23:57.976 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:57.976 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=91444 00:23:57.976 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:57.976 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:57.976 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 91444 /var/tmp/bdevperf.sock 00:23:57.976 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 91444 ']' 00:23:57.976 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:57.976 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:57.976 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:57.976 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:57.976 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.976 [2024-07-22 18:31:08.107928] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
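Before the bdevperf cases start, it is worth collecting what the setup_nvmf_tgt trace above actually did for the perf run that just finished: write the interchange-format PSK produced by format_interchange_psk to a 0600 file, pin the ssl sock implementation to TLS 1.3, and stand up a TCP subsystem with a TLS listener. A condensed sketch using the same RPCs, NQNs, key string, and file name as this run (not the helper's literal body):

    #!/usr/bin/env bash
    set -euo pipefail

    SPDK=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK/scripts/rpc.py"          # default /var/tmp/spdk.sock, as in the trace

    # PSK file exactly as written earlier in this run (mode 0600, no trailing newline).
    KEY=/tmp/tmp.yhXg0mJ7O5
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"
    chmod 0600 "$KEY"

    # Pin the ssl sock implementation to TLS 1.3, then finish deferred app init.
    "$RPC" sock_set_default_impl -i ssl
    "$RPC" sock_impl_set_options -i ssl --tls-version 13
    "$RPC" framework_start_init

    # TCP transport, subsystem, TLS listener (-k), and a malloc-backed namespace.
    "$RPC" nvmf_create_transport -t tcp -o
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    "$RPC" bdev_malloc_create 32 4096 -b malloc0
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    # Authorize host1 and bind it to the PSK file (path-based form, deprecated but used here).
    "$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

The deprecation warning in the log ("PSK path to be removed in v24.09") refers to exactly this --psk file-path form of nvmf_subsystem_add_host.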
00:23:57.976 [2024-07-22 18:31:08.108101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91444 ] 00:23:57.976 [2024-07-22 18:31:08.275553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.976 [2024-07-22 18:31:08.581108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.976 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:57.976 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:57.976 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yhXg0mJ7O5 00:23:57.976 [2024-07-22 18:31:09.287870] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:57.976 [2024-07-22 18:31:09.288067] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:57.976 TLSTESTn1 00:23:57.976 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:57.976 Running I/O for 10 seconds... 00:24:07.947 00:24:07.947 Latency(us) 00:24:07.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.947 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:07.947 Verification LBA range: start 0x0 length 0x2000 00:24:07.947 TLSTESTn1 : 10.03 2682.11 10.48 0.00 0.00 47608.46 2174.60 28478.37 00:24:07.947 =================================================================================================================== 00:24:07.947 Total : 2682.11 10.48 0.00 0.00 47608.46 2174.60 28478.37 00:24:07.947 0 00:24:07.947 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:07.947 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 91444 00:24:07.947 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 91444 ']' 00:24:07.947 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 91444 00:24:07.947 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:07.947 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:07.947 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91444 00:24:07.947 killing process with pid 91444 00:24:07.947 Received shutdown signal, test time was about 10.000000 seconds 00:24:07.947 00:24:07.947 Latency(us) 00:24:07.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.947 =================================================================================================================== 00:24:07.947 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:07.947 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # 
process_name=reactor_2 00:24:07.947 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:07.947 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91444' 00:24:07.947 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 91444 00:24:07.947 [2024-07-22 18:31:19.564424] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:07.947 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 91444 00:24:08.885 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7KQUfF6asL 00:24:08.885 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:08.885 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7KQUfF6asL 00:24:08.885 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:08.885 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:08.885 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:08.885 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:08.885 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7KQUfF6asL 00:24:08.885 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:08.885 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:08.885 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:08.885 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7KQUfF6asL' 00:24:08.885 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:08.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
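The TLSTESTn1 run that just completed (about 2682 IOPS over 10 s) and the failing cases that follow all drive the same initiator-side sequence; only the key file and NQNs change between them. Condensed from the trace, with the same socket path, binaries, and arguments as this run:

    #!/usr/bin/env bash
    set -euo pipefail

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/bdevperf.sock
    KEY=/tmp/tmp.yhXg0mJ7O5

    # Start bdevperf idle (-z) so the bdev can be configured over its own RPC socket first.
    "$SPDK/build/examples/bdevperf" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 &
    perf_pid=$!
    until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

    # Attach the TLS-protected controller; the PSK must match what the target holds for host1.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"

    # Run the configured verify workload, then shut bdevperf down.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -t 20 -s "$SOCK" perform_tests
    kill "$perf_pid"
    wait "$perf_pid" || true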
00:24:08.885 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=91603 00:24:08.885 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:08.885 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:08.885 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 91603 /var/tmp/bdevperf.sock 00:24:08.885 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 91603 ']' 00:24:08.885 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:08.885 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:08.885 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:08.885 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:08.885 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.146 [2024-07-22 18:31:21.000198] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:24:09.146 [2024-07-22 18:31:21.000388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91603 ] 00:24:09.404 [2024-07-22 18:31:21.167359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.662 [2024-07-22 18:31:21.444774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:10.226 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:10.226 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:10.226 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7KQUfF6asL 00:24:10.226 [2024-07-22 18:31:22.144275] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:10.226 [2024-07-22 18:31:22.144514] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:10.226 [2024-07-22 18:31:22.155990] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:10.226 [2024-07-22 18:31:22.156655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:24:10.226 [2024-07-22 18:31:22.157615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:24:10.226 [2024-07-22 18:31:22.158616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.226 
[2024-07-22 18:31:22.158676] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:10.226 [2024-07-22 18:31:22.158719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.226 2024/07/22 18:31:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.7KQUfF6asL subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:24:10.226 request: 00:24:10.226 { 00:24:10.226 "method": "bdev_nvme_attach_controller", 00:24:10.226 "params": { 00:24:10.226 "name": "TLSTEST", 00:24:10.226 "trtype": "tcp", 00:24:10.226 "traddr": "10.0.0.2", 00:24:10.226 "adrfam": "ipv4", 00:24:10.226 "trsvcid": "4420", 00:24:10.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:10.226 "prchk_reftag": false, 00:24:10.226 "prchk_guard": false, 00:24:10.226 "hdgst": false, 00:24:10.226 "ddgst": false, 00:24:10.226 "psk": "/tmp/tmp.7KQUfF6asL" 00:24:10.226 } 00:24:10.226 } 00:24:10.226 Got JSON-RPC error response 00:24:10.226 GoRPCClient: error on JSON-RPC call 00:24:10.226 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 91603 00:24:10.226 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 91603 ']' 00:24:10.226 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 91603 00:24:10.226 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:10.226 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:10.226 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91603 00:24:10.226 killing process with pid 91603 00:24:10.226 Received shutdown signal, test time was about 10.000000 seconds 00:24:10.226 00:24:10.226 Latency(us) 00:24:10.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.226 =================================================================================================================== 00:24:10.226 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:10.226 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:10.226 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:10.226 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91603' 00:24:10.226 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 91603 00:24:10.226 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 91603 00:24:10.226 [2024-07-22 18:31:22.210681] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 
-- # (( es > 128 )) 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yhXg0mJ7O5 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yhXg0mJ7O5 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yhXg0mJ7O5 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.yhXg0mJ7O5' 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=91661 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 91661 /var/tmp/bdevperf.sock 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 91661 ']' 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:11.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:11.638 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:11.896 [2024-07-22 18:31:23.657404] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:24:11.896 [2024-07-22 18:31:23.657629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91661 ] 00:24:11.896 [2024-07-22 18:31:23.833558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.154 [2024-07-22 18:31:24.111662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.722 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:12.722 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:12.722 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.yhXg0mJ7O5 00:24:12.980 [2024-07-22 18:31:24.810228] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:12.980 [2024-07-22 18:31:24.810423] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:12.980 [2024-07-22 18:31:24.823136] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:12.980 [2024-07-22 18:31:24.823207] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:12.980 [2024-07-22 18:31:24.823310] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:12.980 [2024-07-22 18:31:24.824085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:24:12.980 [2024-07-22 18:31:24.825042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:24:12.980 [2024-07-22 18:31:24.826046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:12.980 [2024-07-22 18:31:24.826102] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:12.980 [2024-07-22 18:31:24.826127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:12.980 2024/07/22 18:31:24 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.yhXg0mJ7O5 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:24:12.980 request: 00:24:12.980 { 00:24:12.980 "method": "bdev_nvme_attach_controller", 00:24:12.980 "params": { 00:24:12.980 "name": "TLSTEST", 00:24:12.980 "trtype": "tcp", 00:24:12.980 "traddr": "10.0.0.2", 00:24:12.980 "adrfam": "ipv4", 00:24:12.980 "trsvcid": "4420", 00:24:12.980 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.980 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:12.980 "prchk_reftag": false, 00:24:12.980 "prchk_guard": false, 00:24:12.981 "hdgst": false, 00:24:12.981 "ddgst": false, 00:24:12.981 "psk": "/tmp/tmp.yhXg0mJ7O5" 00:24:12.981 } 00:24:12.981 } 00:24:12.981 Got JSON-RPC error response 00:24:12.981 GoRPCClient: error on JSON-RPC call 00:24:12.981 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 91661 00:24:12.981 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 91661 ']' 00:24:12.981 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 91661 00:24:12.981 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:12.981 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:12.981 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91661 00:24:12.981 killing process with pid 91661 00:24:12.981 Received shutdown signal, test time was about 10.000000 seconds 00:24:12.981 00:24:12.981 Latency(us) 00:24:12.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.981 =================================================================================================================== 00:24:12.981 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:12.981 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:12.981 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:12.981 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91661' 00:24:12.981 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 91661 00:24:12.981 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 91661 00:24:12.981 [2024-07-22 18:31:24.871549] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
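The failing attach attempts above are deliberate: using the second key (/tmp/tmp.7KQUfF6asL) or a host NQN the target never configured means no matching PSK identity can be found, the TLS connection never comes up, and bdev_nvme_attach_controller returns Code=-5. The harness wraps each call in its NOT helper so the failure is the passing outcome. A minimal stand-in for that inversion pattern, reconstructed from the es/valid_exec_arg lines in the trace (the real helper in autotest_common.sh does more argument checking than this):

    #!/usr/bin/env bash
    set -uo pipefail

    SPDK=/home/vagrant/spdk_repo/spdk

    # Succeed only if the wrapped command fails -- a simplified NOT, assumed behavior only.
    NOT() {
        local es=0
        "$@" || es=$?
        # Invert: a non-zero exit from the wrapped command becomes success here.
        (( es != 0 ))
    }

    # Example: this attach is expected to fail because the PSK does not match host1's key.
    NOT "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7KQUfF6asL

The remaining negative case in the log (host1 against the nonexistent cnode2) exercises the same pattern with a mismatched subsystem NQN instead of a mismatched key.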
00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yhXg0mJ7O5 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yhXg0mJ7O5 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:14.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yhXg0mJ7O5 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.yhXg0mJ7O5' 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=91713 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 91713 /var/tmp/bdevperf.sock 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 91713 ']' 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:14.428 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.428 [2024-07-22 18:31:26.273495] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:24:14.428 [2024-07-22 18:31:26.273691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91713 ] 00:24:14.687 [2024-07-22 18:31:26.442869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.946 [2024-07-22 18:31:26.721707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:15.204 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:15.204 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:15.204 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yhXg0mJ7O5 00:24:15.464 [2024-07-22 18:31:27.407233] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:15.464 [2024-07-22 18:31:27.407490] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:15.464 [2024-07-22 18:31:27.418077] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:15.464 [2024-07-22 18:31:27.418129] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:15.464 [2024-07-22 18:31:27.418218] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:15.464 [2024-07-22 18:31:27.419193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:24:15.464 [2024-07-22 18:31:27.420144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:24:15.464 [2024-07-22 18:31:27.421146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:15.464 [2024-07-22 18:31:27.421203] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:15.464 [2024-07-22 18:31:27.421251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:24:15.464 2024/07/22 18:31:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.yhXg0mJ7O5 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:24:15.464 request: 00:24:15.464 { 00:24:15.464 "method": "bdev_nvme_attach_controller", 00:24:15.464 "params": { 00:24:15.464 "name": "TLSTEST", 00:24:15.464 "trtype": "tcp", 00:24:15.464 "traddr": "10.0.0.2", 00:24:15.464 "adrfam": "ipv4", 00:24:15.464 "trsvcid": "4420", 00:24:15.464 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:15.464 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:15.464 "prchk_reftag": false, 00:24:15.464 "prchk_guard": false, 00:24:15.464 "hdgst": false, 00:24:15.464 "ddgst": false, 00:24:15.464 "psk": "/tmp/tmp.yhXg0mJ7O5" 00:24:15.464 } 00:24:15.464 } 00:24:15.464 Got JSON-RPC error response 00:24:15.464 GoRPCClient: error on JSON-RPC call 00:24:15.464 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 91713 00:24:15.464 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 91713 ']' 00:24:15.464 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 91713 00:24:15.464 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:15.464 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:15.464 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91713 00:24:15.464 killing process with pid 91713 00:24:15.464 Received shutdown signal, test time was about 10.000000 seconds 00:24:15.464 00:24:15.464 Latency(us) 00:24:15.464 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.464 =================================================================================================================== 00:24:15.464 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:15.464 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:15.464 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:15.464 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91713' 00:24:15.464 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 91713 00:24:15.464 [2024-07-22 18:31:27.471184] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:15.464 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 91713 00:24:16.840 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:16.840 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:16.840 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:16.840 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:16.840 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
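The Code=-5 above is the expected outcome of the tls.sh@152 negative case: the initiator offers the key from /tmp/tmp.yhXg0mJ7O5, but the target has no PSK registered for the identity "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2", so the server-side lookup (posix_sock_psk_find_session_server_cb) rejects the handshake and the attach fails with an I/O error. As a hedged sketch only, and assuming a cnode2 subsystem with a TLS listener were configured the same way cnode1 is later in this log, the missing piece on the target side would be a host registration binding this host NQN and key file to the subsystem:

  # sketch: register the PSK for the host/subsystem pairing used above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
      nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.yhXg0mJ7O5

With that mapping in place the PSK lookup for the NVMe0R01 identity could succeed; without it, the test correctly expects run_bdevperf to fail.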
00:24:16.840 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:16.840 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:16.840 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:16.840 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:16.840 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:16.840 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:16.840 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:16.840 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:16.840 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:16.840 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:16.840 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:16.840 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:16.840 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:16.840 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=91771 00:24:16.840 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:16.840 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:16.840 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 91771 /var/tmp/bdevperf.sock 00:24:16.841 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 91771 ']' 00:24:16.841 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:16.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:16.841 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:16.841 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:16.841 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:16.841 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.114 [2024-07-22 18:31:28.899862] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:24:17.114 [2024-07-22 18:31:28.900047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91771 ] 00:24:17.114 [2024-07-22 18:31:29.067551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.383 [2024-07-22 18:31:29.347025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.950 18:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:17.950 18:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:17.950 18:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:18.208 [2024-07-22 18:31:30.204224] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:18.208 [2024-07-22 18:31:30.205523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:24:18.208 [2024-07-22 18:31:30.206502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:18.208 [2024-07-22 18:31:30.206559] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:18.208 [2024-07-22 18:31:30.206584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:18.208 2024/07/22 18:31:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:24:18.208 request: 00:24:18.208 { 00:24:18.208 "method": "bdev_nvme_attach_controller", 00:24:18.208 "params": { 00:24:18.208 "name": "TLSTEST", 00:24:18.208 "trtype": "tcp", 00:24:18.208 "traddr": "10.0.0.2", 00:24:18.208 "adrfam": "ipv4", 00:24:18.208 "trsvcid": "4420", 00:24:18.208 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.208 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:18.208 "prchk_reftag": false, 00:24:18.208 "prchk_guard": false, 00:24:18.208 "hdgst": false, 00:24:18.208 "ddgst": false 00:24:18.208 } 00:24:18.208 } 00:24:18.208 Got JSON-RPC error response 00:24:18.208 GoRPCClient: error on JSON-RPC call 00:24:18.466 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 91771 00:24:18.466 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 91771 ']' 00:24:18.466 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 91771 00:24:18.466 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:18.466 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:18.466 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91771 00:24:18.466 killing process with pid 91771 00:24:18.466 Received shutdown signal, test time was about 10.000000 seconds 00:24:18.466 00:24:18.466 Latency(us) 00:24:18.466 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.466 =================================================================================================================== 00:24:18.466 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:18.466 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:18.466 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:18.466 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91771' 00:24:18.466 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 91771 00:24:18.466 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 91771 00:24:19.838 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:19.838 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:19.838 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:19.838 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:19.838 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:19.838 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 91091 00:24:19.838 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 91091 ']' 00:24:19.838 18:31:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 91091 00:24:19.838 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:19.838 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:19.838 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91091 00:24:19.838 killing process with pid 91091 00:24:19.838 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:19.838 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:19.838 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91091' 00:24:19.838 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 91091 00:24:19.839 [2024-07-22 18:31:31.563624] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 91091 00:24:19.839 removal in v24.09 hit 1 times 00:24:21.212 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:21.212 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:21.212 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:24:21.212 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:21.212 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:21.212 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:24:21.212 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:24:21.212 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:21.212 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:24:21.212 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.Nr9H45DkV5 00:24:21.212 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:21.212 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.Nr9H45DkV5 00:24:21.212 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:24:21.212 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:21.212 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:21.212 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
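Before the target is restarted for the positive-path tests, tls.sh@159-162 above prepares the key material: format_interchange_psk wraps the configured hex key in the TLS PSK interchange form (the NVMeTLSkey-1 prefix, a :02: field reflecting the "2" digest argument, and a base64 payload), the string is written to a mktemp file, and the file is restricted to mode 0600. A minimal sketch of that preparation, assuming the suite's nvmf/common.sh helpers are sourced (format_interchange_psk is a test helper, not a standalone tool):

  # build the interchange-format key and store it with owner-only permissions
  key_long=$(format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2)
  key_long_path=$(mktemp)                 # /tmp/tmp.Nr9H45DkV5 in this run
  echo -n "$key_long" > "$key_long_path"
  chmod 0600 "$key_long_path"             # 0666 is deliberately tried and rejected later

The same file path is then passed as --psk on both the target side (nvmf_subsystem_add_host) and the initiator side (bdev_nvme_attach_controller).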
00:24:21.212 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=91855 00:24:21.212 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:21.212 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 91855 00:24:21.212 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 91855 ']' 00:24:21.212 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.212 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:21.213 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.213 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:21.213 18:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.475 [2024-07-22 18:31:33.245775] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:24:21.476 [2024-07-22 18:31:33.246010] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.476 [2024-07-22 18:31:33.431397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.737 [2024-07-22 18:31:33.729329] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:21.737 [2024-07-22 18:31:33.729445] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:21.737 [2024-07-22 18:31:33.729464] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:21.737 [2024-07-22 18:31:33.729480] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:21.737 [2024-07-22 18:31:33.729493] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
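At this point nvmfappstart -m 0x2 has brought up a fresh nvmf_tgt inside the test's network namespace, and waitforlisten blocks until the app serves RPCs on /var/tmp/spdk.sock; the setup_nvmf_tgt calls that follow below then build the TLS-capable target. A condensed sketch of the whole sequence, using the rpc.py calls, address, NQNs and key path from this run (all of these values are specific to this CI environment):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # start the target pinned to core mask 0x2 with all tracepoint groups enabled
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  # ...wait for /var/tmp/spdk.sock to come up before configuring...

  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k     # -k requests a TLS (secure-channel) listener
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Nr9H45DkV5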
00:24:21.737 [2024-07-22 18:31:33.729558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.303 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:22.303 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:22.303 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:22.303 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:22.303 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.303 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:22.303 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.Nr9H45DkV5 00:24:22.303 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Nr9H45DkV5 00:24:22.303 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:22.562 [2024-07-22 18:31:34.442488] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:22.562 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:22.819 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:23.077 [2024-07-22 18:31:34.990742] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:23.077 [2024-07-22 18:31:34.991093] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.077 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:23.336 malloc0 00:24:23.336 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:23.594 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Nr9H45DkV5 00:24:23.852 [2024-07-22 18:31:35.772207] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:23.852 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Nr9H45DkV5 00:24:23.852 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:23.852 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:23.852 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:23.852 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Nr9H45DkV5' 00:24:23.852 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:23.852 18:31:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=91958 00:24:23.852 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:23.852 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:23.852 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 91958 /var/tmp/bdevperf.sock 00:24:23.852 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 91958 ']' 00:24:23.852 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:23.852 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:23.852 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:23.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:23.852 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:23.852 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:24.111 [2024-07-22 18:31:35.915073] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:24:24.111 [2024-07-22 18:31:35.915285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91958 ] 00:24:24.111 [2024-07-22 18:31:36.095600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.677 [2024-07-22 18:31:36.421539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:24.937 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:24.937 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:24.937 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Nr9H45DkV5 00:24:25.195 [2024-07-22 18:31:37.049345] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:25.195 [2024-07-22 18:31:37.049531] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:25.195 TLSTESTn1 00:24:25.195 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:25.452 Running I/O for 10 seconds... 
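This is the positive end-to-end path: bdevperf is launched in wait mode (-z) with its own RPC socket, a controller is attached over TLS using the 0600 key (which creates the TLSTESTn1 bdev), and bdevperf.py perform_tests then drives the verify workload the bdevperf process was started with (-q 128 -o 4096 -w verify -t 10). Condensed into a sketch with the paths and identifiers from this run:

  # 1. start bdevperf idle, listening on its private RPC socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

  # 2. attach the TLS-secured controller through that socket
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.Nr9H45DkV5

  # 3. trigger the configured run via the helper script, as invoked above
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -t 20 -s /var/tmp/bdevperf.sock perform_tests

The roughly 2650 IOPS reported below is simply what this virtualized CI host sustains for the 4 KiB verify workload over the TLS connection, not a performance target.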
00:24:35.457 00:24:35.457 Latency(us) 00:24:35.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.457 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:35.457 Verification LBA range: start 0x0 length 0x2000 00:24:35.457 TLSTESTn1 : 10.03 2653.46 10.37 0.00 0.00 48129.93 12690.15 30980.65 00:24:35.457 =================================================================================================================== 00:24:35.457 Total : 2653.46 10.37 0.00 0.00 48129.93 12690.15 30980.65 00:24:35.457 0 00:24:35.457 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:35.457 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 91958 00:24:35.457 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 91958 ']' 00:24:35.457 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 91958 00:24:35.457 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:35.457 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:35.457 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91958 00:24:35.457 killing process with pid 91958 00:24:35.457 Received shutdown signal, test time was about 10.000000 seconds 00:24:35.457 00:24:35.457 Latency(us) 00:24:35.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.457 =================================================================================================================== 00:24:35.457 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:35.457 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:35.457 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:35.457 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91958' 00:24:35.457 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 91958 00:24:35.457 [2024-07-22 18:31:47.347260] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:35.457 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 91958 00:24:36.829 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.Nr9H45DkV5 00:24:36.829 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Nr9H45DkV5 00:24:36.829 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:36.829 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Nr9H45DkV5 00:24:36.829 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:36.829 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:36.829 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:36.829 18:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:36.829 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Nr9H45DkV5 00:24:36.829 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:36.829 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:36.829 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:36.829 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Nr9H45DkV5' 00:24:36.829 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:36.829 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=92113 00:24:36.829 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:36.829 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 92113 /var/tmp/bdevperf.sock 00:24:36.829 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:36.830 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 92113 ']' 00:24:36.830 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:36.830 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:36.830 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:36.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:36.830 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:36.830 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:36.830 [2024-07-22 18:31:48.812155] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:24:36.830 [2024-07-22 18:31:48.812365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92113 ] 00:24:37.087 [2024-07-22 18:31:48.991510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.345 [2024-07-22 18:31:49.266296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.910 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:37.910 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:37.910 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Nr9H45DkV5 00:24:38.178 [2024-07-22 18:31:50.075350] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:38.179 [2024-07-22 18:31:50.075489] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:38.179 [2024-07-22 18:31:50.075524] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.Nr9H45DkV5 00:24:38.179 2024/07/22 18:31:50 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.Nr9H45DkV5 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:24:38.179 request: 00:24:38.179 { 00:24:38.179 "method": "bdev_nvme_attach_controller", 00:24:38.179 "params": { 00:24:38.179 "name": "TLSTEST", 00:24:38.179 "trtype": "tcp", 00:24:38.179 "traddr": "10.0.0.2", 00:24:38.179 "adrfam": "ipv4", 00:24:38.179 "trsvcid": "4420", 00:24:38.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:38.179 "prchk_reftag": false, 00:24:38.179 "prchk_guard": false, 00:24:38.179 "hdgst": false, 00:24:38.179 "ddgst": false, 00:24:38.179 "psk": "/tmp/tmp.Nr9H45DkV5" 00:24:38.179 } 00:24:38.179 } 00:24:38.179 Got JSON-RPC error response 00:24:38.179 GoRPCClient: error on JSON-RPC call 00:24:38.179 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 92113 00:24:38.179 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 92113 ']' 00:24:38.179 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 92113 00:24:38.179 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:38.179 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:38.179 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92113 00:24:38.179 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:38.179 killing process with pid 92113 00:24:38.179 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:38.179 
18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92113' 00:24:38.179 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 92113 00:24:38.179 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 92113 00:24:38.179 Received shutdown signal, test time was about 10.000000 seconds 00:24:38.179 00:24:38.179 Latency(us) 00:24:38.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.179 =================================================================================================================== 00:24:38.179 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:39.567 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:39.567 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:39.567 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:39.567 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:39.567 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:39.567 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 91855 00:24:39.567 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 91855 ']' 00:24:39.567 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 91855 00:24:39.567 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:39.567 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:39.567 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91855 00:24:39.567 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:39.567 killing process with pid 91855 00:24:39.567 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:39.567 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91855' 00:24:39.567 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 91855 00:24:39.567 [2024-07-22 18:31:51.479852] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:39.567 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 91855 00:24:40.940 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:24:40.940 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:40.940 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:40.940 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:41.198 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=92189 00:24:41.198 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:41.198 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 
92189 00:24:41.198 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 92189 ']' 00:24:41.198 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:41.198 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:41.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:41.198 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:41.198 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:41.198 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:41.198 [2024-07-22 18:31:53.071582] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:24:41.198 [2024-07-22 18:31:53.071760] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:41.458 [2024-07-22 18:31:53.240699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.719 [2024-07-22 18:31:53.516381] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:41.719 [2024-07-22 18:31:53.516466] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:41.719 [2024-07-22 18:31:53.516484] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:41.719 [2024-07-22 18:31:53.516500] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:41.719 [2024-07-22 18:31:53.516512] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:41.719 [2024-07-22 18:31:53.516574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.284 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:42.285 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:42.285 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:42.285 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:42.285 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.285 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.285 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.Nr9H45DkV5 00:24:42.285 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:42.285 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Nr9H45DkV5 00:24:42.285 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:24:42.285 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:42.285 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:24:42.285 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:42.285 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.Nr9H45DkV5 00:24:42.285 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Nr9H45DkV5 00:24:42.285 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:42.543 [2024-07-22 18:31:54.362509] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:42.543 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:42.800 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:43.058 [2024-07-22 18:31:54.886673] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:43.058 [2024-07-22 18:31:54.887050] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.058 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:43.316 malloc0 00:24:43.316 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:43.574 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Nr9H45DkV5 00:24:43.833 [2024-07-22 18:31:55.708048] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect 
permissions for PSK file 00:24:43.833 [2024-07-22 18:31:55.708118] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:24:43.833 [2024-07-22 18:31:55.708158] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:43.833 2024/07/22 18:31:55 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.Nr9H45DkV5], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:24:43.833 request: 00:24:43.833 { 00:24:43.833 "method": "nvmf_subsystem_add_host", 00:24:43.833 "params": { 00:24:43.833 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.833 "host": "nqn.2016-06.io.spdk:host1", 00:24:43.833 "psk": "/tmp/tmp.Nr9H45DkV5" 00:24:43.833 } 00:24:43.833 } 00:24:43.833 Got JSON-RPC error response 00:24:43.833 GoRPCClient: error on JSON-RPC call 00:24:43.833 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:43.833 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:43.833 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:43.833 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:43.833 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 92189 00:24:43.833 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 92189 ']' 00:24:43.833 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 92189 00:24:43.833 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:43.833 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:43.833 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92189 00:24:43.833 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:43.833 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:43.833 killing process with pid 92189 00:24:43.833 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92189' 00:24:43.833 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 92189 00:24:43.833 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 92189 00:24:45.207 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.Nr9H45DkV5 00:24:45.207 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:24:45.207 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:45.207 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:45.207 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:45.207 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=92318 00:24:45.207 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:45.207 18:31:57 
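Both rejections in this block are driven purely by the file mode: the same interchange key that worked at 0600 is refused once tls.sh@170 loosens it to 0666, first on the initiator (bdev_nvme_load_psk: "Incorrect permissions for PSK file", so bdev_nvme_attach_controller returns Operation not permitted) and then on the target (tcp_load_psk: "Could not retrieve PSK from file", so nvmf_subsystem_add_host fails with an internal error). tls.sh@181 above restores owner-only access before the final positive run. A small sketch of the check and the fix, using this run's key path:

  ls -l /tmp/tmp.Nr9H45DkV5        # -rw-rw-rw- (0666): rejected on both sides
  chmod 0600 /tmp/tmp.Nr9H45DkV5   # owner-only again, as done at target/tls.sh@181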
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 92318 00:24:45.207 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 92318 ']' 00:24:45.207 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.207 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:45.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.207 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.207 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:45.207 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:45.465 [2024-07-22 18:31:57.292508] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:24:45.465 [2024-07-22 18:31:57.292693] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.465 [2024-07-22 18:31:57.465938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.032 [2024-07-22 18:31:57.742010] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.032 [2024-07-22 18:31:57.742113] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:46.032 [2024-07-22 18:31:57.742131] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.032 [2024-07-22 18:31:57.742148] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:46.032 [2024-07-22 18:31:57.742160] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:46.032 [2024-07-22 18:31:57.742222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.290 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:46.290 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:46.290 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:46.290 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:46.290 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:46.290 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.290 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.Nr9H45DkV5 00:24:46.290 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Nr9H45DkV5 00:24:46.290 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:46.549 [2024-07-22 18:31:58.468704] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.549 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:46.808 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:47.066 [2024-07-22 18:31:58.972873] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:47.066 [2024-07-22 18:31:58.973231] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.066 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:47.324 malloc0 00:24:47.324 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:47.582 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Nr9H45DkV5 00:24:47.841 [2024-07-22 18:31:59.774518] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:47.841 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:47.841 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=92414 00:24:47.841 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:47.841 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 92414 /var/tmp/bdevperf.sock 00:24:47.841 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 92414 ']' 00:24:47.841 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:24:47.841 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:47.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:47.841 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:47.841 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:47.841 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.099 [2024-07-22 18:31:59.898893] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:24:48.099 [2024-07-22 18:31:59.899081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92414 ] 00:24:48.099 [2024-07-22 18:32:00.070179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.358 [2024-07-22 18:32:00.367378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:48.924 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:48.924 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:48.924 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Nr9H45DkV5 00:24:49.181 [2024-07-22 18:32:01.009413] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:49.181 [2024-07-22 18:32:01.009620] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:49.181 TLSTESTn1 00:24:49.181 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:24:49.440 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:24:49.440 "subsystems": [ 00:24:49.440 { 00:24:49.440 "subsystem": "keyring", 00:24:49.440 "config": [] 00:24:49.440 }, 00:24:49.440 { 00:24:49.440 "subsystem": "iobuf", 00:24:49.440 "config": [ 00:24:49.440 { 00:24:49.440 "method": "iobuf_set_options", 00:24:49.440 "params": { 00:24:49.440 "large_bufsize": 135168, 00:24:49.440 "large_pool_count": 1024, 00:24:49.440 "small_bufsize": 8192, 00:24:49.440 "small_pool_count": 8192 00:24:49.440 } 00:24:49.440 } 00:24:49.440 ] 00:24:49.440 }, 00:24:49.440 { 00:24:49.440 "subsystem": "sock", 00:24:49.440 "config": [ 00:24:49.440 { 00:24:49.440 "method": "sock_set_default_impl", 00:24:49.440 "params": { 00:24:49.440 "impl_name": "posix" 00:24:49.440 } 00:24:49.440 }, 00:24:49.440 { 00:24:49.440 "method": "sock_impl_set_options", 00:24:49.440 "params": { 00:24:49.440 "enable_ktls": false, 00:24:49.440 "enable_placement_id": 0, 00:24:49.440 "enable_quickack": false, 00:24:49.440 "enable_recv_pipe": true, 00:24:49.440 "enable_zerocopy_send_client": false, 00:24:49.440 "enable_zerocopy_send_server": true, 00:24:49.440 "impl_name": "ssl", 00:24:49.440 "recv_buf_size": 4096, 
00:24:49.440 "send_buf_size": 4096, 00:24:49.440 "tls_version": 0, 00:24:49.440 "zerocopy_threshold": 0 00:24:49.440 } 00:24:49.440 }, 00:24:49.440 { 00:24:49.440 "method": "sock_impl_set_options", 00:24:49.440 "params": { 00:24:49.440 "enable_ktls": false, 00:24:49.440 "enable_placement_id": 0, 00:24:49.440 "enable_quickack": false, 00:24:49.440 "enable_recv_pipe": true, 00:24:49.440 "enable_zerocopy_send_client": false, 00:24:49.440 "enable_zerocopy_send_server": true, 00:24:49.440 "impl_name": "posix", 00:24:49.440 "recv_buf_size": 2097152, 00:24:49.440 "send_buf_size": 2097152, 00:24:49.440 "tls_version": 0, 00:24:49.440 "zerocopy_threshold": 0 00:24:49.440 } 00:24:49.440 } 00:24:49.440 ] 00:24:49.440 }, 00:24:49.440 { 00:24:49.440 "subsystem": "vmd", 00:24:49.440 "config": [] 00:24:49.440 }, 00:24:49.440 { 00:24:49.440 "subsystem": "accel", 00:24:49.440 "config": [ 00:24:49.440 { 00:24:49.440 "method": "accel_set_options", 00:24:49.440 "params": { 00:24:49.440 "buf_count": 2048, 00:24:49.440 "large_cache_size": 16, 00:24:49.440 "sequence_count": 2048, 00:24:49.440 "small_cache_size": 128, 00:24:49.440 "task_count": 2048 00:24:49.440 } 00:24:49.440 } 00:24:49.440 ] 00:24:49.440 }, 00:24:49.440 { 00:24:49.440 "subsystem": "bdev", 00:24:49.440 "config": [ 00:24:49.440 { 00:24:49.440 "method": "bdev_set_options", 00:24:49.440 "params": { 00:24:49.440 "bdev_auto_examine": true, 00:24:49.440 "bdev_io_cache_size": 256, 00:24:49.440 "bdev_io_pool_size": 65535, 00:24:49.440 "iobuf_large_cache_size": 16, 00:24:49.440 "iobuf_small_cache_size": 128 00:24:49.440 } 00:24:49.440 }, 00:24:49.440 { 00:24:49.440 "method": "bdev_raid_set_options", 00:24:49.440 "params": { 00:24:49.440 "process_max_bandwidth_mb_sec": 0, 00:24:49.440 "process_window_size_kb": 1024 00:24:49.440 } 00:24:49.440 }, 00:24:49.440 { 00:24:49.440 "method": "bdev_iscsi_set_options", 00:24:49.440 "params": { 00:24:49.440 "timeout_sec": 30 00:24:49.440 } 00:24:49.440 }, 00:24:49.440 { 00:24:49.440 "method": "bdev_nvme_set_options", 00:24:49.440 "params": { 00:24:49.440 "action_on_timeout": "none", 00:24:49.440 "allow_accel_sequence": false, 00:24:49.440 "arbitration_burst": 0, 00:24:49.440 "bdev_retry_count": 3, 00:24:49.440 "ctrlr_loss_timeout_sec": 0, 00:24:49.440 "delay_cmd_submit": true, 00:24:49.440 "dhchap_dhgroups": [ 00:24:49.440 "null", 00:24:49.440 "ffdhe2048", 00:24:49.440 "ffdhe3072", 00:24:49.440 "ffdhe4096", 00:24:49.440 "ffdhe6144", 00:24:49.440 "ffdhe8192" 00:24:49.440 ], 00:24:49.440 "dhchap_digests": [ 00:24:49.440 "sha256", 00:24:49.440 "sha384", 00:24:49.440 "sha512" 00:24:49.440 ], 00:24:49.440 "disable_auto_failback": false, 00:24:49.440 "fast_io_fail_timeout_sec": 0, 00:24:49.440 "generate_uuids": false, 00:24:49.440 "high_priority_weight": 0, 00:24:49.440 "io_path_stat": false, 00:24:49.440 "io_queue_requests": 0, 00:24:49.440 "keep_alive_timeout_ms": 10000, 00:24:49.440 "low_priority_weight": 0, 00:24:49.440 "medium_priority_weight": 0, 00:24:49.440 "nvme_adminq_poll_period_us": 10000, 00:24:49.440 "nvme_error_stat": false, 00:24:49.440 "nvme_ioq_poll_period_us": 0, 00:24:49.440 "rdma_cm_event_timeout_ms": 0, 00:24:49.440 "rdma_max_cq_size": 0, 00:24:49.440 "rdma_srq_size": 0, 00:24:49.440 "reconnect_delay_sec": 0, 00:24:49.440 "timeout_admin_us": 0, 00:24:49.440 "timeout_us": 0, 00:24:49.440 "transport_ack_timeout": 0, 00:24:49.440 "transport_retry_count": 4, 00:24:49.440 "transport_tos": 0 00:24:49.440 } 00:24:49.440 }, 00:24:49.440 { 00:24:49.440 "method": "bdev_nvme_set_hotplug", 00:24:49.440 "params": { 
00:24:49.440 "enable": false, 00:24:49.440 "period_us": 100000 00:24:49.441 } 00:24:49.441 }, 00:24:49.441 { 00:24:49.441 "method": "bdev_malloc_create", 00:24:49.441 "params": { 00:24:49.441 "block_size": 4096, 00:24:49.441 "dif_is_head_of_md": false, 00:24:49.441 "dif_pi_format": 0, 00:24:49.441 "dif_type": 0, 00:24:49.441 "md_size": 0, 00:24:49.441 "name": "malloc0", 00:24:49.441 "num_blocks": 8192, 00:24:49.441 "optimal_io_boundary": 0, 00:24:49.441 "physical_block_size": 4096, 00:24:49.441 "uuid": "77d92e68-7b32-4354-bb7a-b4c12ab36d31" 00:24:49.441 } 00:24:49.441 }, 00:24:49.441 { 00:24:49.441 "method": "bdev_wait_for_examine" 00:24:49.441 } 00:24:49.441 ] 00:24:49.441 }, 00:24:49.441 { 00:24:49.441 "subsystem": "nbd", 00:24:49.441 "config": [] 00:24:49.441 }, 00:24:49.441 { 00:24:49.441 "subsystem": "scheduler", 00:24:49.441 "config": [ 00:24:49.441 { 00:24:49.441 "method": "framework_set_scheduler", 00:24:49.441 "params": { 00:24:49.441 "name": "static" 00:24:49.441 } 00:24:49.441 } 00:24:49.441 ] 00:24:49.441 }, 00:24:49.441 { 00:24:49.441 "subsystem": "nvmf", 00:24:49.441 "config": [ 00:24:49.441 { 00:24:49.441 "method": "nvmf_set_config", 00:24:49.441 "params": { 00:24:49.441 "admin_cmd_passthru": { 00:24:49.441 "identify_ctrlr": false 00:24:49.441 }, 00:24:49.441 "discovery_filter": "match_any" 00:24:49.441 } 00:24:49.441 }, 00:24:49.441 { 00:24:49.441 "method": "nvmf_set_max_subsystems", 00:24:49.441 "params": { 00:24:49.441 "max_subsystems": 1024 00:24:49.441 } 00:24:49.441 }, 00:24:49.441 { 00:24:49.441 "method": "nvmf_set_crdt", 00:24:49.441 "params": { 00:24:49.441 "crdt1": 0, 00:24:49.441 "crdt2": 0, 00:24:49.441 "crdt3": 0 00:24:49.441 } 00:24:49.441 }, 00:24:49.441 { 00:24:49.441 "method": "nvmf_create_transport", 00:24:49.441 "params": { 00:24:49.441 "abort_timeout_sec": 1, 00:24:49.441 "ack_timeout": 0, 00:24:49.441 "buf_cache_size": 4294967295, 00:24:49.441 "c2h_success": false, 00:24:49.441 "data_wr_pool_size": 0, 00:24:49.441 "dif_insert_or_strip": false, 00:24:49.441 "in_capsule_data_size": 4096, 00:24:49.441 "io_unit_size": 131072, 00:24:49.441 "max_aq_depth": 128, 00:24:49.441 "max_io_qpairs_per_ctrlr": 127, 00:24:49.441 "max_io_size": 131072, 00:24:49.441 "max_queue_depth": 128, 00:24:49.441 "num_shared_buffers": 511, 00:24:49.441 "sock_priority": 0, 00:24:49.441 "trtype": "TCP", 00:24:49.441 "zcopy": false 00:24:49.441 } 00:24:49.441 }, 00:24:49.441 { 00:24:49.441 "method": "nvmf_create_subsystem", 00:24:49.441 "params": { 00:24:49.441 "allow_any_host": false, 00:24:49.441 "ana_reporting": false, 00:24:49.441 "max_cntlid": 65519, 00:24:49.441 "max_namespaces": 10, 00:24:49.441 "min_cntlid": 1, 00:24:49.441 "model_number": "SPDK bdev Controller", 00:24:49.441 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:49.441 "serial_number": "SPDK00000000000001" 00:24:49.441 } 00:24:49.441 }, 00:24:49.441 { 00:24:49.441 "method": "nvmf_subsystem_add_host", 00:24:49.441 "params": { 00:24:49.441 "host": "nqn.2016-06.io.spdk:host1", 00:24:49.441 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:49.441 "psk": "/tmp/tmp.Nr9H45DkV5" 00:24:49.441 } 00:24:49.441 }, 00:24:49.441 { 00:24:49.441 "method": "nvmf_subsystem_add_ns", 00:24:49.441 "params": { 00:24:49.441 "namespace": { 00:24:49.441 "bdev_name": "malloc0", 00:24:49.441 "nguid": "77D92E687B324354BB7AB4C12AB36D31", 00:24:49.441 "no_auto_visible": false, 00:24:49.441 "nsid": 1, 00:24:49.441 "uuid": "77d92e68-7b32-4354-bb7a-b4c12ab36d31" 00:24:49.441 }, 00:24:49.441 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:24:49.441 } 00:24:49.441 }, 
00:24:49.441 { 00:24:49.441 "method": "nvmf_subsystem_add_listener", 00:24:49.441 "params": { 00:24:49.441 "listen_address": { 00:24:49.441 "adrfam": "IPv4", 00:24:49.441 "traddr": "10.0.0.2", 00:24:49.441 "trsvcid": "4420", 00:24:49.441 "trtype": "TCP" 00:24:49.441 }, 00:24:49.441 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:49.441 "secure_channel": true 00:24:49.441 } 00:24:49.441 } 00:24:49.441 ] 00:24:49.441 } 00:24:49.441 ] 00:24:49.441 }' 00:24:49.441 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:50.008 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:24:50.008 "subsystems": [ 00:24:50.008 { 00:24:50.008 "subsystem": "keyring", 00:24:50.008 "config": [] 00:24:50.008 }, 00:24:50.008 { 00:24:50.008 "subsystem": "iobuf", 00:24:50.008 "config": [ 00:24:50.008 { 00:24:50.008 "method": "iobuf_set_options", 00:24:50.008 "params": { 00:24:50.008 "large_bufsize": 135168, 00:24:50.008 "large_pool_count": 1024, 00:24:50.008 "small_bufsize": 8192, 00:24:50.008 "small_pool_count": 8192 00:24:50.008 } 00:24:50.008 } 00:24:50.008 ] 00:24:50.008 }, 00:24:50.008 { 00:24:50.008 "subsystem": "sock", 00:24:50.008 "config": [ 00:24:50.008 { 00:24:50.008 "method": "sock_set_default_impl", 00:24:50.008 "params": { 00:24:50.008 "impl_name": "posix" 00:24:50.008 } 00:24:50.008 }, 00:24:50.008 { 00:24:50.008 "method": "sock_impl_set_options", 00:24:50.008 "params": { 00:24:50.008 "enable_ktls": false, 00:24:50.008 "enable_placement_id": 0, 00:24:50.008 "enable_quickack": false, 00:24:50.008 "enable_recv_pipe": true, 00:24:50.008 "enable_zerocopy_send_client": false, 00:24:50.008 "enable_zerocopy_send_server": true, 00:24:50.008 "impl_name": "ssl", 00:24:50.008 "recv_buf_size": 4096, 00:24:50.008 "send_buf_size": 4096, 00:24:50.008 "tls_version": 0, 00:24:50.008 "zerocopy_threshold": 0 00:24:50.008 } 00:24:50.008 }, 00:24:50.008 { 00:24:50.008 "method": "sock_impl_set_options", 00:24:50.008 "params": { 00:24:50.008 "enable_ktls": false, 00:24:50.008 "enable_placement_id": 0, 00:24:50.008 "enable_quickack": false, 00:24:50.008 "enable_recv_pipe": true, 00:24:50.008 "enable_zerocopy_send_client": false, 00:24:50.008 "enable_zerocopy_send_server": true, 00:24:50.008 "impl_name": "posix", 00:24:50.008 "recv_buf_size": 2097152, 00:24:50.008 "send_buf_size": 2097152, 00:24:50.008 "tls_version": 0, 00:24:50.008 "zerocopy_threshold": 0 00:24:50.008 } 00:24:50.008 } 00:24:50.008 ] 00:24:50.008 }, 00:24:50.008 { 00:24:50.008 "subsystem": "vmd", 00:24:50.008 "config": [] 00:24:50.008 }, 00:24:50.008 { 00:24:50.008 "subsystem": "accel", 00:24:50.008 "config": [ 00:24:50.008 { 00:24:50.008 "method": "accel_set_options", 00:24:50.008 "params": { 00:24:50.008 "buf_count": 2048, 00:24:50.008 "large_cache_size": 16, 00:24:50.008 "sequence_count": 2048, 00:24:50.008 "small_cache_size": 128, 00:24:50.008 "task_count": 2048 00:24:50.008 } 00:24:50.008 } 00:24:50.008 ] 00:24:50.008 }, 00:24:50.008 { 00:24:50.008 "subsystem": "bdev", 00:24:50.008 "config": [ 00:24:50.008 { 00:24:50.008 "method": "bdev_set_options", 00:24:50.008 "params": { 00:24:50.008 "bdev_auto_examine": true, 00:24:50.008 "bdev_io_cache_size": 256, 00:24:50.008 "bdev_io_pool_size": 65535, 00:24:50.008 "iobuf_large_cache_size": 16, 00:24:50.008 "iobuf_small_cache_size": 128 00:24:50.008 } 00:24:50.008 }, 00:24:50.008 { 00:24:50.008 "method": "bdev_raid_set_options", 00:24:50.008 "params": { 00:24:50.008 
"process_max_bandwidth_mb_sec": 0, 00:24:50.008 "process_window_size_kb": 1024 00:24:50.008 } 00:24:50.008 }, 00:24:50.008 { 00:24:50.008 "method": "bdev_iscsi_set_options", 00:24:50.008 "params": { 00:24:50.008 "timeout_sec": 30 00:24:50.008 } 00:24:50.008 }, 00:24:50.008 { 00:24:50.008 "method": "bdev_nvme_set_options", 00:24:50.008 "params": { 00:24:50.008 "action_on_timeout": "none", 00:24:50.008 "allow_accel_sequence": false, 00:24:50.008 "arbitration_burst": 0, 00:24:50.008 "bdev_retry_count": 3, 00:24:50.008 "ctrlr_loss_timeout_sec": 0, 00:24:50.008 "delay_cmd_submit": true, 00:24:50.009 "dhchap_dhgroups": [ 00:24:50.009 "null", 00:24:50.009 "ffdhe2048", 00:24:50.009 "ffdhe3072", 00:24:50.009 "ffdhe4096", 00:24:50.009 "ffdhe6144", 00:24:50.009 "ffdhe8192" 00:24:50.009 ], 00:24:50.009 "dhchap_digests": [ 00:24:50.009 "sha256", 00:24:50.009 "sha384", 00:24:50.009 "sha512" 00:24:50.009 ], 00:24:50.009 "disable_auto_failback": false, 00:24:50.009 "fast_io_fail_timeout_sec": 0, 00:24:50.009 "generate_uuids": false, 00:24:50.009 "high_priority_weight": 0, 00:24:50.009 "io_path_stat": false, 00:24:50.009 "io_queue_requests": 512, 00:24:50.009 "keep_alive_timeout_ms": 10000, 00:24:50.009 "low_priority_weight": 0, 00:24:50.009 "medium_priority_weight": 0, 00:24:50.009 "nvme_adminq_poll_period_us": 10000, 00:24:50.009 "nvme_error_stat": false, 00:24:50.009 "nvme_ioq_poll_period_us": 0, 00:24:50.009 "rdma_cm_event_timeout_ms": 0, 00:24:50.009 "rdma_max_cq_size": 0, 00:24:50.009 "rdma_srq_size": 0, 00:24:50.009 "reconnect_delay_sec": 0, 00:24:50.009 "timeout_admin_us": 0, 00:24:50.009 "timeout_us": 0, 00:24:50.009 "transport_ack_timeout": 0, 00:24:50.009 "transport_retry_count": 4, 00:24:50.009 "transport_tos": 0 00:24:50.009 } 00:24:50.009 }, 00:24:50.009 { 00:24:50.009 "method": "bdev_nvme_attach_controller", 00:24:50.009 "params": { 00:24:50.009 "adrfam": "IPv4", 00:24:50.009 "ctrlr_loss_timeout_sec": 0, 00:24:50.009 "ddgst": false, 00:24:50.009 "fast_io_fail_timeout_sec": 0, 00:24:50.009 "hdgst": false, 00:24:50.009 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:50.009 "name": "TLSTEST", 00:24:50.009 "prchk_guard": false, 00:24:50.009 "prchk_reftag": false, 00:24:50.009 "psk": "/tmp/tmp.Nr9H45DkV5", 00:24:50.009 "reconnect_delay_sec": 0, 00:24:50.009 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:50.009 "traddr": "10.0.0.2", 00:24:50.009 "trsvcid": "4420", 00:24:50.009 "trtype": "TCP" 00:24:50.009 } 00:24:50.009 }, 00:24:50.009 { 00:24:50.009 "method": "bdev_nvme_set_hotplug", 00:24:50.009 "params": { 00:24:50.009 "enable": false, 00:24:50.009 "period_us": 100000 00:24:50.009 } 00:24:50.009 }, 00:24:50.009 { 00:24:50.009 "method": "bdev_wait_for_examine" 00:24:50.009 } 00:24:50.009 ] 00:24:50.009 }, 00:24:50.009 { 00:24:50.009 "subsystem": "nbd", 00:24:50.009 "config": [] 00:24:50.009 } 00:24:50.009 ] 00:24:50.009 }' 00:24:50.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 92414 00:24:50.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 92414 ']' 00:24:50.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 92414 00:24:50.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:50.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:50.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92414 00:24:50.009 
18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:50.009 killing process with pid 92414 00:24:50.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:50.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92414' 00:24:50.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 92414 00:24:50.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 92414 00:24:50.009 Received shutdown signal, test time was about 10.000000 seconds 00:24:50.009 00:24:50.009 Latency(us) 00:24:50.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.009 =================================================================================================================== 00:24:50.009 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:50.009 [2024-07-22 18:32:01.760978] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:51.383 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 92318 00:24:51.383 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 92318 ']' 00:24:51.383 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 92318 00:24:51.383 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:51.383 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:51.383 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92318 00:24:51.383 killing process with pid 92318 00:24:51.383 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:51.383 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:51.383 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92318' 00:24:51.383 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 92318 00:24:51.383 [2024-07-22 18:32:03.093928] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:51.383 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 92318 00:24:52.756 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:52.756 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:52.756 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:52.756 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:24:52.756 "subsystems": [ 00:24:52.756 { 00:24:52.756 "subsystem": "keyring", 00:24:52.756 "config": [] 00:24:52.756 }, 00:24:52.756 { 00:24:52.756 "subsystem": "iobuf", 00:24:52.756 "config": [ 00:24:52.756 { 00:24:52.756 "method": "iobuf_set_options", 00:24:52.756 "params": { 00:24:52.756 "large_bufsize": 135168, 00:24:52.756 "large_pool_count": 1024, 00:24:52.756 "small_bufsize": 8192, 00:24:52.756 "small_pool_count": 8192 
00:24:52.756 } 00:24:52.756 } 00:24:52.756 ] 00:24:52.756 }, 00:24:52.756 { 00:24:52.756 "subsystem": "sock", 00:24:52.756 "config": [ 00:24:52.756 { 00:24:52.756 "method": "sock_set_default_impl", 00:24:52.756 "params": { 00:24:52.756 "impl_name": "posix" 00:24:52.756 } 00:24:52.756 }, 00:24:52.756 { 00:24:52.756 "method": "sock_impl_set_options", 00:24:52.756 "params": { 00:24:52.756 "enable_ktls": false, 00:24:52.756 "enable_placement_id": 0, 00:24:52.756 "enable_quickack": false, 00:24:52.756 "enable_recv_pipe": true, 00:24:52.756 "enable_zerocopy_send_client": false, 00:24:52.756 "enable_zerocopy_send_server": true, 00:24:52.756 "impl_name": "ssl", 00:24:52.756 "recv_buf_size": 4096, 00:24:52.756 "send_buf_size": 4096, 00:24:52.756 "tls_version": 0, 00:24:52.756 "zerocopy_threshold": 0 00:24:52.756 } 00:24:52.756 }, 00:24:52.756 { 00:24:52.756 "method": "sock_impl_set_options", 00:24:52.756 "params": { 00:24:52.756 "enable_ktls": false, 00:24:52.756 "enable_placement_id": 0, 00:24:52.756 "enable_quickack": false, 00:24:52.756 "enable_recv_pipe": true, 00:24:52.756 "enable_zerocopy_send_client": false, 00:24:52.756 "enable_zerocopy_send_server": true, 00:24:52.756 "impl_name": "posix", 00:24:52.756 "recv_buf_size": 2097152, 00:24:52.756 "send_buf_size": 2097152, 00:24:52.756 "tls_version": 0, 00:24:52.756 "zerocopy_threshold": 0 00:24:52.756 } 00:24:52.756 } 00:24:52.756 ] 00:24:52.756 }, 00:24:52.756 { 00:24:52.756 "subsystem": "vmd", 00:24:52.756 "config": [] 00:24:52.756 }, 00:24:52.756 { 00:24:52.756 "subsystem": "accel", 00:24:52.756 "config": [ 00:24:52.756 { 00:24:52.756 "method": "accel_set_options", 00:24:52.756 "params": { 00:24:52.756 "buf_count": 2048, 00:24:52.756 "large_cache_size": 16, 00:24:52.756 "sequence_count": 2048, 00:24:52.756 "small_cache_size": 128, 00:24:52.756 "task_count": 2048 00:24:52.756 } 00:24:52.756 } 00:24:52.756 ] 00:24:52.756 }, 00:24:52.756 { 00:24:52.756 "subsystem": "bdev", 00:24:52.756 "config": [ 00:24:52.756 { 00:24:52.756 "method": "bdev_set_options", 00:24:52.756 "params": { 00:24:52.756 "bdev_auto_examine": true, 00:24:52.756 "bdev_io_cache_size": 256, 00:24:52.756 "bdev_io_pool_size": 65535, 00:24:52.756 "iobuf_large_cache_size": 16, 00:24:52.756 "iobuf_small_cache_size": 128 00:24:52.756 } 00:24:52.756 }, 00:24:52.756 { 00:24:52.756 "method": "bdev_raid_set_options", 00:24:52.756 "params": { 00:24:52.756 "process_max_bandwidth_mb_sec": 0, 00:24:52.756 "process_window_size_kb": 1024 00:24:52.756 } 00:24:52.756 }, 00:24:52.757 { 00:24:52.757 "method": "bdev_iscsi_set_options", 00:24:52.757 "params": { 00:24:52.757 "timeout_sec": 30 00:24:52.757 } 00:24:52.757 }, 00:24:52.757 { 00:24:52.757 "method": "bdev_nvme_set_options", 00:24:52.757 "params": { 00:24:52.757 "action_on_timeout": "none", 00:24:52.757 "allow_accel_sequence": false, 00:24:52.757 "arbitration_burst": 0, 00:24:52.757 "bdev_retry_count": 3, 00:24:52.757 "ctrlr_loss_timeout_sec": 0, 00:24:52.757 "delay_cmd_submit": true, 00:24:52.757 "dhchap_dhgroups": [ 00:24:52.757 "null", 00:24:52.757 "ffdhe2048", 00:24:52.757 "ffdhe3072", 00:24:52.757 "ffdhe4096", 00:24:52.757 "ffdhe6144", 00:24:52.757 "ffdhe8192" 00:24:52.757 ], 00:24:52.757 "dhchap_digests": [ 00:24:52.757 "sha256", 00:24:52.757 "sha384", 00:24:52.757 "sha512" 00:24:52.757 ], 00:24:52.757 "disable_auto_failback": false, 00:24:52.757 "fast_io_fail_timeout_sec": 0, 00:24:52.757 "generate_uuids": false, 00:24:52.757 "high_priority_weight": 0, 00:24:52.757 "io_path_stat": false, 00:24:52.757 "io_queue_requests": 0, 
00:24:52.757 "keep_alive_timeout_ms": 10000, 00:24:52.757 "low_priority_weight": 0, 00:24:52.757 "medium_priority_weight": 0, 00:24:52.757 "nvme_adminq_poll_period_us": 10000, 00:24:52.757 "nvme_error_stat": false, 00:24:52.757 "nvme_ioq_poll_period_us": 0, 00:24:52.757 "rdma_cm_event_timeout_ms": 0, 00:24:52.757 "rdma_max_cq_size": 0, 00:24:52.757 "rdma_srq_size": 0, 00:24:52.757 "reconnect_delay_sec": 0, 00:24:52.757 "timeout_admin_us": 0, 00:24:52.757 "timeout_us": 0, 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:52.757 00:24:52.757 "transport_ack_timeout": 0, 00:24:52.757 "transport_retry_count": 4, 00:24:52.757 "transport_tos": 0 00:24:52.757 } 00:24:52.757 }, 00:24:52.757 { 00:24:52.757 "method": "bdev_nvme_set_hotplug", 00:24:52.757 "params": { 00:24:52.757 "enable": false, 00:24:52.757 "period_us": 100000 00:24:52.757 } 00:24:52.757 }, 00:24:52.757 { 00:24:52.757 "method": "bdev_malloc_create", 00:24:52.757 "params": { 00:24:52.757 "block_size": 4096, 00:24:52.757 "dif_is_head_of_md": false, 00:24:52.757 "dif_pi_format": 0, 00:24:52.757 "dif_type": 0, 00:24:52.757 "md_size": 0, 00:24:52.757 "name": "malloc0", 00:24:52.757 "num_blocks": 8192, 00:24:52.757 "optimal_io_boundary": 0, 00:24:52.757 "physical_block_size": 4096, 00:24:52.757 "uuid": "77d92e68-7b32-4354-bb7a-b4c12ab36d31" 00:24:52.757 } 00:24:52.757 }, 00:24:52.757 { 00:24:52.757 "method": "bdev_wait_for_examine" 00:24:52.757 } 00:24:52.757 ] 00:24:52.757 }, 00:24:52.757 { 00:24:52.757 "subsystem": "nbd", 00:24:52.757 "config": [] 00:24:52.757 }, 00:24:52.757 { 00:24:52.757 "subsystem": "scheduler", 00:24:52.757 "config": [ 00:24:52.757 { 00:24:52.757 "method": "framework_set_scheduler", 00:24:52.757 "params": { 00:24:52.757 "name": "static" 00:24:52.757 } 00:24:52.757 } 00:24:52.757 ] 00:24:52.757 }, 00:24:52.757 { 00:24:52.757 "subsystem": "nvmf", 00:24:52.757 "config": [ 00:24:52.757 { 00:24:52.757 "method": "nvmf_set_config", 00:24:52.757 "params": { 00:24:52.757 "admin_cmd_passthru": { 00:24:52.757 "identify_ctrlr": false 00:24:52.757 }, 00:24:52.757 "discovery_filter": "match_any" 00:24:52.757 } 00:24:52.757 }, 00:24:52.757 { 00:24:52.757 "method": "nvmf_set_max_subsystems", 00:24:52.757 "params": { 00:24:52.757 "max_subsystems": 1024 00:24:52.757 } 00:24:52.757 }, 00:24:52.757 { 00:24:52.757 "method": "nvmf_set_crdt", 00:24:52.757 "params": { 00:24:52.757 "crdt1": 0, 00:24:52.757 "crdt2": 0, 00:24:52.757 "crdt3": 0 00:24:52.757 } 00:24:52.757 }, 00:24:52.757 { 00:24:52.757 "method": "nvmf_create_transport", 00:24:52.757 "params": { 00:24:52.757 "abort_timeout_sec": 1, 00:24:52.757 "ack_timeout": 0, 00:24:52.757 "buf_cache_size": 4294967295, 00:24:52.757 "c2h_success": false, 00:24:52.757 "data_wr_pool_size": 0, 00:24:52.757 "dif_insert_or_strip": false, 00:24:52.757 "in_capsule_data_size": 4096, 00:24:52.757 "io_unit_size": 131072, 00:24:52.757 "max_aq_depth": 128, 00:24:52.757 "max_io_qpairs_per_ctrlr": 127, 00:24:52.757 "max_io_size": 131072, 00:24:52.757 "max_queue_depth": 128, 00:24:52.757 "num_shared_buffers": 511, 00:24:52.757 "sock_priority": 0, 00:24:52.757 "trtype": "TCP", 00:24:52.757 "zcopy": false 00:24:52.757 } 00:24:52.757 }, 00:24:52.757 { 00:24:52.757 "method": "nvmf_create_subsystem", 00:24:52.757 "params": { 00:24:52.757 "allow_any_host": false, 00:24:52.757 "ana_reporting": false, 00:24:52.757 "max_cntlid": 65519, 00:24:52.757 "max_namespaces": 10, 00:24:52.757 "min_cntlid": 1, 00:24:52.757 "model_number": "SPDK bdev Controller", 00:24:52.757 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:24:52.757 "serial_number": "SPDK00000000000001" 00:24:52.757 } 00:24:52.757 }, 00:24:52.757 { 00:24:52.757 "method": "nvmf_subsystem_add_host", 00:24:52.757 "params": { 00:24:52.757 "host": "nqn.2016-06.io.spdk:host1", 00:24:52.757 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.757 "psk": "/tmp/tmp.Nr9H45DkV5" 00:24:52.757 } 00:24:52.757 }, 00:24:52.757 { 00:24:52.757 "method": "nvmf_subsystem_add_ns", 00:24:52.757 "params": { 00:24:52.757 "namespace": { 00:24:52.757 "bdev_name": "malloc0", 00:24:52.757 "nguid": "77D92E687B324354BB7AB4C12AB36D31", 00:24:52.757 "no_auto_visible": false, 00:24:52.757 "nsid": 1, 00:24:52.757 "uuid": "77d92e68-7b32-4354-bb7a-b4c12ab36d31" 00:24:52.757 }, 00:24:52.757 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:24:52.757 } 00:24:52.757 }, 00:24:52.757 { 00:24:52.757 "method": "nvmf_subsystem_add_listener", 00:24:52.757 "params": { 00:24:52.757 "listen_address": { 00:24:52.757 "adrfam": "IPv4", 00:24:52.757 "traddr": "10.0.0.2", 00:24:52.757 "trsvcid": "4420", 00:24:52.757 "trtype": "TCP" 00:24:52.757 }, 00:24:52.757 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.757 "secure_channel": true 00:24:52.757 } 00:24:52.757 } 00:24:52.757 ] 00:24:52.757 } 00:24:52.757 ] 00:24:52.757 }' 00:24:52.757 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=92517 00:24:52.757 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:52.757 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 92517 00:24:52.757 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 92517 ']' 00:24:52.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.757 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.757 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:52.757 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.757 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:52.757 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:52.757 [2024-07-22 18:32:04.667236] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:24:52.757 [2024-07-22 18:32:04.667563] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:53.015 [2024-07-22 18:32:04.882445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.320 [2024-07-22 18:32:05.180125] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:53.320 [2024-07-22 18:32:05.180211] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:53.320 [2024-07-22 18:32:05.180246] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:53.320 [2024-07-22 18:32:05.180262] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:53.320 [2024-07-22 18:32:05.180274] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:53.320 [2024-07-22 18:32:05.180447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.906 [2024-07-22 18:32:05.704732] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:53.906 [2024-07-22 18:32:05.732202] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:53.906 [2024-07-22 18:32:05.748191] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:53.906 [2024-07-22 18:32:05.748510] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:53.906 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:53.906 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:53.906 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:53.906 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:53.906 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:53.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:53.906 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:53.906 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=92561 00:24:53.906 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 92561 /var/tmp/bdevperf.sock 00:24:53.906 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 92561 ']' 00:24:53.906 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:53.906 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:53.906 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:53.906 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:53.906 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:53.906 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:53.906 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:24:53.906 "subsystems": [ 00:24:53.906 { 00:24:53.906 "subsystem": "keyring", 00:24:53.906 "config": [] 00:24:53.906 }, 00:24:53.906 { 00:24:53.906 "subsystem": "iobuf", 00:24:53.906 "config": [ 00:24:53.906 { 00:24:53.906 "method": "iobuf_set_options", 00:24:53.906 "params": { 00:24:53.906 "large_bufsize": 135168, 00:24:53.906 "large_pool_count": 1024, 00:24:53.906 "small_bufsize": 8192, 00:24:53.906 "small_pool_count": 8192 00:24:53.906 } 00:24:53.906 } 00:24:53.906 ] 00:24:53.906 }, 00:24:53.906 { 00:24:53.906 "subsystem": "sock", 00:24:53.906 "config": [ 00:24:53.906 { 00:24:53.906 "method": "sock_set_default_impl", 00:24:53.906 "params": { 00:24:53.906 "impl_name": "posix" 00:24:53.906 } 00:24:53.906 }, 00:24:53.906 { 00:24:53.906 "method": "sock_impl_set_options", 00:24:53.906 "params": { 00:24:53.906 "enable_ktls": false, 00:24:53.906 "enable_placement_id": 0, 00:24:53.906 "enable_quickack": false, 00:24:53.906 "enable_recv_pipe": true, 00:24:53.906 "enable_zerocopy_send_client": false, 00:24:53.906 "enable_zerocopy_send_server": true, 00:24:53.906 "impl_name": "ssl", 00:24:53.906 "recv_buf_size": 4096, 00:24:53.906 "send_buf_size": 4096, 00:24:53.906 "tls_version": 0, 00:24:53.906 "zerocopy_threshold": 0 00:24:53.906 } 00:24:53.906 }, 00:24:53.906 { 00:24:53.906 "method": "sock_impl_set_options", 00:24:53.906 "params": { 00:24:53.906 "enable_ktls": false, 00:24:53.906 "enable_placement_id": 0, 00:24:53.906 "enable_quickack": false, 00:24:53.906 "enable_recv_pipe": true, 00:24:53.906 "enable_zerocopy_send_client": false, 00:24:53.906 "enable_zerocopy_send_server": true, 00:24:53.906 "impl_name": "posix", 00:24:53.906 "recv_buf_size": 2097152, 00:24:53.906 "send_buf_size": 2097152, 00:24:53.906 "tls_version": 0, 00:24:53.906 "zerocopy_threshold": 0 00:24:53.906 } 00:24:53.907 } 00:24:53.907 ] 00:24:53.907 }, 00:24:53.907 { 00:24:53.907 "subsystem": "vmd", 00:24:53.907 "config": [] 00:24:53.907 }, 00:24:53.907 { 00:24:53.907 "subsystem": "accel", 00:24:53.907 "config": [ 00:24:53.907 { 00:24:53.907 "method": "accel_set_options", 00:24:53.907 "params": { 00:24:53.907 "buf_count": 2048, 00:24:53.907 "large_cache_size": 16, 00:24:53.907 "sequence_count": 2048, 00:24:53.907 "small_cache_size": 128, 00:24:53.907 "task_count": 2048 00:24:53.907 } 00:24:53.907 } 00:24:53.907 ] 00:24:53.907 }, 00:24:53.907 { 00:24:53.907 "subsystem": "bdev", 00:24:53.907 "config": [ 00:24:53.907 { 00:24:53.907 "method": "bdev_set_options", 00:24:53.907 "params": { 00:24:53.907 "bdev_auto_examine": true, 00:24:53.907 "bdev_io_cache_size": 256, 00:24:53.907 "bdev_io_pool_size": 65535, 00:24:53.907 "iobuf_large_cache_size": 16, 00:24:53.907 "iobuf_small_cache_size": 128 00:24:53.907 } 00:24:53.907 }, 00:24:53.907 { 00:24:53.907 "method": "bdev_raid_set_options", 00:24:53.907 "params": { 00:24:53.907 "process_max_bandwidth_mb_sec": 0, 00:24:53.907 "process_window_size_kb": 1024 00:24:53.907 } 00:24:53.907 }, 00:24:53.907 { 00:24:53.907 "method": "bdev_iscsi_set_options", 00:24:53.907 "params": { 
00:24:53.907 "timeout_sec": 30 00:24:53.907 } 00:24:53.907 }, 00:24:53.907 { 00:24:53.907 "method": "bdev_nvme_set_options", 00:24:53.907 "params": { 00:24:53.907 "action_on_timeout": "none", 00:24:53.907 "allow_accel_sequence": false, 00:24:53.907 "arbitration_burst": 0, 00:24:53.907 "bdev_retry_count": 3, 00:24:53.907 "ctrlr_loss_timeout_sec": 0, 00:24:53.907 "delay_cmd_submit": true, 00:24:53.907 "dhchap_dhgroups": [ 00:24:53.907 "null", 00:24:53.907 "ffdhe2048", 00:24:53.907 "ffdhe3072", 00:24:53.907 "ffdhe4096", 00:24:53.907 "ffdhe6144", 00:24:53.907 "ffdhe8192" 00:24:53.907 ], 00:24:53.907 "dhchap_digests": [ 00:24:53.907 "sha256", 00:24:53.907 "sha384", 00:24:53.907 "sha512" 00:24:53.907 ], 00:24:53.907 "disable_auto_failback": false, 00:24:53.907 "fast_io_fail_timeout_sec": 0, 00:24:53.907 "generate_uuids": false, 00:24:53.907 "high_priority_weight": 0, 00:24:53.907 "io_path_stat": false, 00:24:53.907 "io_queue_requests": 512, 00:24:53.907 "keep_alive_timeout_ms": 10000, 00:24:53.907 "low_priority_weight": 0, 00:24:53.907 "medium_priority_weight": 0, 00:24:53.907 "nvme_adminq_poll_period_us": 10000, 00:24:53.907 "nvme_error_stat": false, 00:24:53.907 "nvme_ioq_poll_period_us": 0, 00:24:53.907 "rdma_cm_event_timeout_ms": 0, 00:24:53.907 "rdma_max_cq_size": 0, 00:24:53.907 "rdma_srq_size": 0, 00:24:53.907 "reconnect_delay_sec": 0, 00:24:53.907 "timeout_admin_us": 0, 00:24:53.907 "timeout_us": 0, 00:24:53.907 "transport_ack_timeout": 0, 00:24:53.907 "transport_retry_count": 4, 00:24:53.907 "transport_tos": 0 00:24:53.907 } 00:24:53.907 }, 00:24:53.907 { 00:24:53.907 "method": "bdev_nvme_attach_controller", 00:24:53.907 "params": { 00:24:53.907 "adrfam": "IPv4", 00:24:53.907 "ctrlr_loss_timeout_sec": 0, 00:24:53.907 "ddgst": false, 00:24:53.907 "fast_io_fail_timeout_sec": 0, 00:24:53.907 "hdgst": false, 00:24:53.907 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:53.907 "name": "TLSTEST", 00:24:53.907 "prchk_guard": false, 00:24:53.907 "prchk_reftag": false, 00:24:53.907 "psk": "/tmp/tmp.Nr9H45DkV5", 00:24:53.907 "reconnect_delay_sec": 0, 00:24:53.907 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:53.907 "traddr": "10.0.0.2", 00:24:53.907 "trsvcid": "4420", 00:24:53.907 "trtype": "TCP" 00:24:53.907 } 00:24:53.907 }, 00:24:53.907 { 00:24:53.907 "method": "bdev_nvme_set_hotplug", 00:24:53.907 "params": { 00:24:53.907 "enable": false, 00:24:53.907 "period_us": 100000 00:24:53.907 } 00:24:53.907 }, 00:24:53.907 { 00:24:53.907 "method": "bdev_wait_for_examine" 00:24:53.907 } 00:24:53.907 ] 00:24:53.907 }, 00:24:53.907 { 00:24:53.907 "subsystem": "nbd", 00:24:53.907 "config": [] 00:24:53.907 } 00:24:53.907 ] 00:24:53.907 }' 00:24:54.165 [2024-07-22 18:32:05.961335] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:24:54.165 [2024-07-22 18:32:05.961531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92561 ] 00:24:54.165 [2024-07-22 18:32:06.130488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.423 [2024-07-22 18:32:06.426369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:54.988 [2024-07-22 18:32:06.889614] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:54.988 [2024-07-22 18:32:06.890101] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:55.245 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:55.245 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:55.245 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:55.245 Running I/O for 10 seconds... 00:25:05.231 00:25:05.231 Latency(us) 00:25:05.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.231 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:05.231 Verification LBA range: start 0x0 length 0x2000 00:25:05.231 TLSTESTn1 : 10.02 2746.44 10.73 0.00 0.00 46506.65 10128.29 39798.23 00:25:05.231 =================================================================================================================== 00:25:05.231 Total : 2746.44 10.73 0.00 0.00 46506.65 10128.29 39798.23 00:25:05.231 0 00:25:05.231 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:05.231 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 92561 00:25:05.231 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 92561 ']' 00:25:05.231 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 92561 00:25:05.231 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:05.231 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:05.231 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92561 00:25:05.231 killing process with pid 92561 00:25:05.231 Received shutdown signal, test time was about 10.000000 seconds 00:25:05.231 00:25:05.231 Latency(us) 00:25:05.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.231 =================================================================================================================== 00:25:05.231 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:05.231 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:05.232 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:05.232 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92561' 00:25:05.232 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@967 -- # kill 92561 00:25:05.232 [2024-07-22 18:32:17.245080] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:05.232 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 92561 00:25:06.607 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 92517 00:25:06.607 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 92517 ']' 00:25:06.607 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 92517 00:25:06.607 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:06.607 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:06.607 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92517 00:25:06.607 killing process with pid 92517 00:25:06.607 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:06.607 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:06.607 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92517' 00:25:06.607 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 92517 00:25:06.607 [2024-07-22 18:32:18.615280] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:06.607 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 92517 00:25:08.512 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:25:08.513 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:08.513 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:08.513 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:08.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.513 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=92731 00:25:08.513 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 92731 00:25:08.513 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 92731 ']' 00:25:08.513 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:08.513 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.513 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:08.513 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:08.513 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:08.513 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:08.513 [2024-07-22 18:32:20.189506] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:08.513 [2024-07-22 18:32:20.189715] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.513 [2024-07-22 18:32:20.376267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.771 [2024-07-22 18:32:20.677544] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.771 [2024-07-22 18:32:20.677625] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.771 [2024-07-22 18:32:20.677659] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.771 [2024-07-22 18:32:20.677676] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.771 [2024-07-22 18:32:20.677688] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:08.771 [2024-07-22 18:32:20.677745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.337 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:09.337 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:09.337 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:09.337 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:09.337 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:09.337 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:09.337 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.Nr9H45DkV5 00:25:09.337 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Nr9H45DkV5 00:25:09.337 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:09.337 [2024-07-22 18:32:21.326247] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:09.337 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:09.596 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:09.853 [2024-07-22 18:32:21.834477] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:09.853 [2024-07-22 18:32:21.834894] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:09.853 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 
00:25:10.111 malloc0 00:25:10.369 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:10.369 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Nr9H45DkV5 00:25:10.627 [2024-07-22 18:32:22.605407] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:10.628 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:10.628 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=92834 00:25:10.628 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:10.628 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 92834 /var/tmp/bdevperf.sock 00:25:10.628 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 92834 ']' 00:25:10.628 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:10.628 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:10.628 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:10.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:10.628 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:10.628 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:10.885 [2024-07-22 18:32:22.725186] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:25:10.885 [2024-07-22 18:32:22.725372] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92834 ] 00:25:10.885 [2024-07-22 18:32:22.899812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.453 [2024-07-22 18:32:23.189982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.714 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:11.714 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:11.714 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Nr9H45DkV5 00:25:11.971 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:12.229 [2024-07-22 18:32:24.184690] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:12.487 nvme0n1 00:25:12.487 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:12.487 Running I/O for 1 seconds... 00:25:13.861 00:25:13.861 Latency(us) 00:25:13.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.861 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:13.861 Verification LBA range: start 0x0 length 0x2000 00:25:13.861 nvme0n1 : 1.04 2714.21 10.60 0.00 0.00 46475.72 9055.88 28001.75 00:25:13.861 =================================================================================================================== 00:25:13.861 Total : 2714.21 10.60 0.00 0.00 46475.72 9055.88 28001.75 00:25:13.861 0 00:25:13.861 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 92834 00:25:13.861 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 92834 ']' 00:25:13.861 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 92834 00:25:13.861 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:13.861 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:13.861 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92834 00:25:13.861 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:13.861 killing process with pid 92834 00:25:13.861 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:13.861 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92834' 00:25:13.861 Received shutdown signal, test time was about 1.000000 seconds 00:25:13.861 00:25:13.861 Latency(us) 00:25:13.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.861 
=================================================================================================================== 00:25:13.861 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:13.861 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 92834 00:25:13.861 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 92834 00:25:14.792 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 92731 00:25:14.792 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 92731 ']' 00:25:14.792 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 92731 00:25:14.792 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:14.792 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:14.792 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92731 00:25:15.049 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:15.049 killing process with pid 92731 00:25:15.049 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:15.049 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92731' 00:25:15.049 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 92731 00:25:15.049 [2024-07-22 18:32:26.813440] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:15.049 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 92731 00:25:16.425 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:25:16.425 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:16.425 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:16.425 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:16.425 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=92928 00:25:16.425 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:16.425 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 92928 00:25:16.425 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 92928 ']' 00:25:16.425 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.425 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:16.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:16.425 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:16.426 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:16.426 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:16.426 [2024-07-22 18:32:28.354539] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:16.426 [2024-07-22 18:32:28.354758] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:16.684 [2024-07-22 18:32:28.535284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.942 [2024-07-22 18:32:28.854922] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:16.942 [2024-07-22 18:32:28.854999] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:16.942 [2024-07-22 18:32:28.855017] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:16.942 [2024-07-22 18:32:28.855032] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:16.942 [2024-07-22 18:32:28.855044] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:16.942 [2024-07-22 18:32:28.855103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.508 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:17.508 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:17.508 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:17.508 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:17.508 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:17.508 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:17.508 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:25:17.508 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.508 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:17.508 [2024-07-22 18:32:29.333109] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:17.508 malloc0 00:25:17.508 [2024-07-22 18:32:29.393805] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:17.508 [2024-07-22 18:32:29.394172] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:17.508 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.508 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=92978 00:25:17.508 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:17.508 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 92978 /var/tmp/bdevperf.sock 00:25:17.508 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 92978 ']' 
00:25:17.508 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:17.508 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:17.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:17.508 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:17.508 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:17.508 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:17.766 [2024-07-22 18:32:29.544401] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:17.766 [2024-07-22 18:32:29.544610] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92978 ] 00:25:17.767 [2024-07-22 18:32:29.724095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.024 [2024-07-22 18:32:30.024509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.590 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:18.590 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:18.590 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Nr9H45DkV5 00:25:18.859 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:19.118 [2024-07-22 18:32:30.974882] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:19.118 nvme0n1 00:25:19.118 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:19.376 Running I/O for 1 seconds... 
00:25:20.311 00:25:20.311 Latency(us) 00:25:20.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.311 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:20.311 Verification LBA range: start 0x0 length 0x2000 00:25:20.311 nvme0n1 : 1.04 2703.18 10.56 0.00 0.00 46660.53 8996.31 28001.75 00:25:20.311 =================================================================================================================== 00:25:20.311 Total : 2703.18 10.56 0.00 0.00 46660.53 8996.31 28001.75 00:25:20.311 0 00:25:20.311 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:25:20.311 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.311 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:20.569 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.569 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:25:20.569 "subsystems": [ 00:25:20.569 { 00:25:20.569 "subsystem": "keyring", 00:25:20.569 "config": [ 00:25:20.569 { 00:25:20.569 "method": "keyring_file_add_key", 00:25:20.569 "params": { 00:25:20.569 "name": "key0", 00:25:20.569 "path": "/tmp/tmp.Nr9H45DkV5" 00:25:20.569 } 00:25:20.569 } 00:25:20.569 ] 00:25:20.569 }, 00:25:20.569 { 00:25:20.569 "subsystem": "iobuf", 00:25:20.569 "config": [ 00:25:20.570 { 00:25:20.570 "method": "iobuf_set_options", 00:25:20.570 "params": { 00:25:20.570 "large_bufsize": 135168, 00:25:20.570 "large_pool_count": 1024, 00:25:20.570 "small_bufsize": 8192, 00:25:20.570 "small_pool_count": 8192 00:25:20.570 } 00:25:20.570 } 00:25:20.570 ] 00:25:20.570 }, 00:25:20.570 { 00:25:20.570 "subsystem": "sock", 00:25:20.570 "config": [ 00:25:20.570 { 00:25:20.570 "method": "sock_set_default_impl", 00:25:20.570 "params": { 00:25:20.570 "impl_name": "posix" 00:25:20.570 } 00:25:20.570 }, 00:25:20.570 { 00:25:20.570 "method": "sock_impl_set_options", 00:25:20.570 "params": { 00:25:20.570 "enable_ktls": false, 00:25:20.570 "enable_placement_id": 0, 00:25:20.570 "enable_quickack": false, 00:25:20.570 "enable_recv_pipe": true, 00:25:20.570 "enable_zerocopy_send_client": false, 00:25:20.570 "enable_zerocopy_send_server": true, 00:25:20.570 "impl_name": "ssl", 00:25:20.570 "recv_buf_size": 4096, 00:25:20.570 "send_buf_size": 4096, 00:25:20.570 "tls_version": 0, 00:25:20.570 "zerocopy_threshold": 0 00:25:20.570 } 00:25:20.570 }, 00:25:20.570 { 00:25:20.570 "method": "sock_impl_set_options", 00:25:20.570 "params": { 00:25:20.570 "enable_ktls": false, 00:25:20.570 "enable_placement_id": 0, 00:25:20.570 "enable_quickack": false, 00:25:20.570 "enable_recv_pipe": true, 00:25:20.570 "enable_zerocopy_send_client": false, 00:25:20.570 "enable_zerocopy_send_server": true, 00:25:20.570 "impl_name": "posix", 00:25:20.570 "recv_buf_size": 2097152, 00:25:20.570 "send_buf_size": 2097152, 00:25:20.570 "tls_version": 0, 00:25:20.570 "zerocopy_threshold": 0 00:25:20.570 } 00:25:20.570 } 00:25:20.570 ] 00:25:20.570 }, 00:25:20.570 { 00:25:20.570 "subsystem": "vmd", 00:25:20.570 "config": [] 00:25:20.570 }, 00:25:20.570 { 00:25:20.570 "subsystem": "accel", 00:25:20.570 "config": [ 00:25:20.570 { 00:25:20.570 "method": "accel_set_options", 00:25:20.570 "params": { 00:25:20.570 "buf_count": 2048, 00:25:20.570 "large_cache_size": 16, 00:25:20.570 "sequence_count": 2048, 00:25:20.570 "small_cache_size": 128, 00:25:20.570 "task_count": 
2048 00:25:20.570 } 00:25:20.570 } 00:25:20.570 ] 00:25:20.570 }, 00:25:20.570 { 00:25:20.570 "subsystem": "bdev", 00:25:20.570 "config": [ 00:25:20.570 { 00:25:20.570 "method": "bdev_set_options", 00:25:20.570 "params": { 00:25:20.570 "bdev_auto_examine": true, 00:25:20.570 "bdev_io_cache_size": 256, 00:25:20.570 "bdev_io_pool_size": 65535, 00:25:20.570 "iobuf_large_cache_size": 16, 00:25:20.570 "iobuf_small_cache_size": 128 00:25:20.570 } 00:25:20.570 }, 00:25:20.570 { 00:25:20.570 "method": "bdev_raid_set_options", 00:25:20.570 "params": { 00:25:20.570 "process_max_bandwidth_mb_sec": 0, 00:25:20.570 "process_window_size_kb": 1024 00:25:20.570 } 00:25:20.570 }, 00:25:20.570 { 00:25:20.570 "method": "bdev_iscsi_set_options", 00:25:20.570 "params": { 00:25:20.570 "timeout_sec": 30 00:25:20.570 } 00:25:20.570 }, 00:25:20.570 { 00:25:20.570 "method": "bdev_nvme_set_options", 00:25:20.570 "params": { 00:25:20.570 "action_on_timeout": "none", 00:25:20.570 "allow_accel_sequence": false, 00:25:20.570 "arbitration_burst": 0, 00:25:20.570 "bdev_retry_count": 3, 00:25:20.570 "ctrlr_loss_timeout_sec": 0, 00:25:20.570 "delay_cmd_submit": true, 00:25:20.570 "dhchap_dhgroups": [ 00:25:20.570 "null", 00:25:20.570 "ffdhe2048", 00:25:20.570 "ffdhe3072", 00:25:20.570 "ffdhe4096", 00:25:20.570 "ffdhe6144", 00:25:20.570 "ffdhe8192" 00:25:20.570 ], 00:25:20.570 "dhchap_digests": [ 00:25:20.570 "sha256", 00:25:20.570 "sha384", 00:25:20.570 "sha512" 00:25:20.570 ], 00:25:20.570 "disable_auto_failback": false, 00:25:20.570 "fast_io_fail_timeout_sec": 0, 00:25:20.570 "generate_uuids": false, 00:25:20.570 "high_priority_weight": 0, 00:25:20.570 "io_path_stat": false, 00:25:20.570 "io_queue_requests": 0, 00:25:20.570 "keep_alive_timeout_ms": 10000, 00:25:20.570 "low_priority_weight": 0, 00:25:20.570 "medium_priority_weight": 0, 00:25:20.570 "nvme_adminq_poll_period_us": 10000, 00:25:20.570 "nvme_error_stat": false, 00:25:20.570 "nvme_ioq_poll_period_us": 0, 00:25:20.570 "rdma_cm_event_timeout_ms": 0, 00:25:20.570 "rdma_max_cq_size": 0, 00:25:20.570 "rdma_srq_size": 0, 00:25:20.570 "reconnect_delay_sec": 0, 00:25:20.570 "timeout_admin_us": 0, 00:25:20.570 "timeout_us": 0, 00:25:20.570 "transport_ack_timeout": 0, 00:25:20.570 "transport_retry_count": 4, 00:25:20.570 "transport_tos": 0 00:25:20.570 } 00:25:20.570 }, 00:25:20.570 { 00:25:20.570 "method": "bdev_nvme_set_hotplug", 00:25:20.570 "params": { 00:25:20.570 "enable": false, 00:25:20.570 "period_us": 100000 00:25:20.570 } 00:25:20.570 }, 00:25:20.570 { 00:25:20.570 "method": "bdev_malloc_create", 00:25:20.570 "params": { 00:25:20.570 "block_size": 4096, 00:25:20.570 "dif_is_head_of_md": false, 00:25:20.570 "dif_pi_format": 0, 00:25:20.570 "dif_type": 0, 00:25:20.570 "md_size": 0, 00:25:20.570 "name": "malloc0", 00:25:20.570 "num_blocks": 8192, 00:25:20.570 "optimal_io_boundary": 0, 00:25:20.570 "physical_block_size": 4096, 00:25:20.570 "uuid": "14b640f2-6ba0-4838-b459-85a8e43273ae" 00:25:20.570 } 00:25:20.570 }, 00:25:20.570 { 00:25:20.570 "method": "bdev_wait_for_examine" 00:25:20.570 } 00:25:20.570 ] 00:25:20.570 }, 00:25:20.570 { 00:25:20.570 "subsystem": "nbd", 00:25:20.570 "config": [] 00:25:20.570 }, 00:25:20.570 { 00:25:20.570 "subsystem": "scheduler", 00:25:20.570 "config": [ 00:25:20.570 { 00:25:20.570 "method": "framework_set_scheduler", 00:25:20.570 "params": { 00:25:20.570 "name": "static" 00:25:20.570 } 00:25:20.570 } 00:25:20.570 ] 00:25:20.570 }, 00:25:20.570 { 00:25:20.570 "subsystem": "nvmf", 00:25:20.570 "config": [ 00:25:20.570 { 00:25:20.570 
"method": "nvmf_set_config", 00:25:20.570 "params": { 00:25:20.570 "admin_cmd_passthru": { 00:25:20.570 "identify_ctrlr": false 00:25:20.570 }, 00:25:20.570 "discovery_filter": "match_any" 00:25:20.570 } 00:25:20.570 }, 00:25:20.570 { 00:25:20.570 "method": "nvmf_set_max_subsystems", 00:25:20.570 "params": { 00:25:20.570 "max_subsystems": 1024 00:25:20.570 } 00:25:20.570 }, 00:25:20.570 { 00:25:20.570 "method": "nvmf_set_crdt", 00:25:20.570 "params": { 00:25:20.570 "crdt1": 0, 00:25:20.570 "crdt2": 0, 00:25:20.570 "crdt3": 0 00:25:20.570 } 00:25:20.570 }, 00:25:20.570 { 00:25:20.570 "method": "nvmf_create_transport", 00:25:20.570 "params": { 00:25:20.570 "abort_timeout_sec": 1, 00:25:20.570 "ack_timeout": 0, 00:25:20.570 "buf_cache_size": 4294967295, 00:25:20.570 "c2h_success": false, 00:25:20.570 "data_wr_pool_size": 0, 00:25:20.570 "dif_insert_or_strip": false, 00:25:20.570 "in_capsule_data_size": 4096, 00:25:20.570 "io_unit_size": 131072, 00:25:20.570 "max_aq_depth": 128, 00:25:20.570 "max_io_qpairs_per_ctrlr": 127, 00:25:20.570 "max_io_size": 131072, 00:25:20.570 "max_queue_depth": 128, 00:25:20.570 "num_shared_buffers": 511, 00:25:20.570 "sock_priority": 0, 00:25:20.570 "trtype": "TCP", 00:25:20.570 "zcopy": false 00:25:20.570 } 00:25:20.570 }, 00:25:20.570 { 00:25:20.570 "method": "nvmf_create_subsystem", 00:25:20.570 "params": { 00:25:20.570 "allow_any_host": false, 00:25:20.570 "ana_reporting": false, 00:25:20.570 "max_cntlid": 65519, 00:25:20.570 "max_namespaces": 32, 00:25:20.570 "min_cntlid": 1, 00:25:20.570 "model_number": "SPDK bdev Controller", 00:25:20.570 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:20.570 "serial_number": "00000000000000000000" 00:25:20.570 } 00:25:20.570 }, 00:25:20.570 { 00:25:20.570 "method": "nvmf_subsystem_add_host", 00:25:20.570 "params": { 00:25:20.570 "host": "nqn.2016-06.io.spdk:host1", 00:25:20.570 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:20.570 "psk": "key0" 00:25:20.570 } 00:25:20.570 }, 00:25:20.570 { 00:25:20.570 "method": "nvmf_subsystem_add_ns", 00:25:20.570 "params": { 00:25:20.570 "namespace": { 00:25:20.570 "bdev_name": "malloc0", 00:25:20.570 "nguid": "14B640F26BA04838B45985A8E43273AE", 00:25:20.570 "no_auto_visible": false, 00:25:20.570 "nsid": 1, 00:25:20.570 "uuid": "14b640f2-6ba0-4838-b459-85a8e43273ae" 00:25:20.570 }, 00:25:20.570 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:25:20.570 } 00:25:20.570 }, 00:25:20.570 { 00:25:20.570 "method": "nvmf_subsystem_add_listener", 00:25:20.570 "params": { 00:25:20.570 "listen_address": { 00:25:20.570 "adrfam": "IPv4", 00:25:20.570 "traddr": "10.0.0.2", 00:25:20.570 "trsvcid": "4420", 00:25:20.570 "trtype": "TCP" 00:25:20.570 }, 00:25:20.570 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:20.570 "secure_channel": false, 00:25:20.570 "sock_impl": "ssl" 00:25:20.570 } 00:25:20.570 } 00:25:20.570 ] 00:25:20.570 } 00:25:20.570 ] 00:25:20.571 }' 00:25:20.571 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:20.830 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:25:20.830 "subsystems": [ 00:25:20.830 { 00:25:20.830 "subsystem": "keyring", 00:25:20.830 "config": [ 00:25:20.830 { 00:25:20.830 "method": "keyring_file_add_key", 00:25:20.830 "params": { 00:25:20.830 "name": "key0", 00:25:20.830 "path": "/tmp/tmp.Nr9H45DkV5" 00:25:20.830 } 00:25:20.830 } 00:25:20.830 ] 00:25:20.830 }, 00:25:20.830 { 00:25:20.830 "subsystem": "iobuf", 00:25:20.830 "config": [ 00:25:20.830 
{ 00:25:20.830 "method": "iobuf_set_options", 00:25:20.830 "params": { 00:25:20.830 "large_bufsize": 135168, 00:25:20.830 "large_pool_count": 1024, 00:25:20.830 "small_bufsize": 8192, 00:25:20.830 "small_pool_count": 8192 00:25:20.830 } 00:25:20.830 } 00:25:20.830 ] 00:25:20.830 }, 00:25:20.830 { 00:25:20.830 "subsystem": "sock", 00:25:20.830 "config": [ 00:25:20.830 { 00:25:20.830 "method": "sock_set_default_impl", 00:25:20.830 "params": { 00:25:20.830 "impl_name": "posix" 00:25:20.830 } 00:25:20.830 }, 00:25:20.830 { 00:25:20.830 "method": "sock_impl_set_options", 00:25:20.830 "params": { 00:25:20.830 "enable_ktls": false, 00:25:20.830 "enable_placement_id": 0, 00:25:20.830 "enable_quickack": false, 00:25:20.830 "enable_recv_pipe": true, 00:25:20.830 "enable_zerocopy_send_client": false, 00:25:20.830 "enable_zerocopy_send_server": true, 00:25:20.830 "impl_name": "ssl", 00:25:20.830 "recv_buf_size": 4096, 00:25:20.830 "send_buf_size": 4096, 00:25:20.830 "tls_version": 0, 00:25:20.830 "zerocopy_threshold": 0 00:25:20.830 } 00:25:20.830 }, 00:25:20.830 { 00:25:20.830 "method": "sock_impl_set_options", 00:25:20.830 "params": { 00:25:20.830 "enable_ktls": false, 00:25:20.830 "enable_placement_id": 0, 00:25:20.830 "enable_quickack": false, 00:25:20.830 "enable_recv_pipe": true, 00:25:20.830 "enable_zerocopy_send_client": false, 00:25:20.830 "enable_zerocopy_send_server": true, 00:25:20.830 "impl_name": "posix", 00:25:20.830 "recv_buf_size": 2097152, 00:25:20.830 "send_buf_size": 2097152, 00:25:20.830 "tls_version": 0, 00:25:20.830 "zerocopy_threshold": 0 00:25:20.830 } 00:25:20.830 } 00:25:20.830 ] 00:25:20.830 }, 00:25:20.830 { 00:25:20.830 "subsystem": "vmd", 00:25:20.830 "config": [] 00:25:20.830 }, 00:25:20.830 { 00:25:20.830 "subsystem": "accel", 00:25:20.830 "config": [ 00:25:20.830 { 00:25:20.830 "method": "accel_set_options", 00:25:20.830 "params": { 00:25:20.830 "buf_count": 2048, 00:25:20.830 "large_cache_size": 16, 00:25:20.830 "sequence_count": 2048, 00:25:20.830 "small_cache_size": 128, 00:25:20.830 "task_count": 2048 00:25:20.830 } 00:25:20.830 } 00:25:20.830 ] 00:25:20.830 }, 00:25:20.830 { 00:25:20.830 "subsystem": "bdev", 00:25:20.830 "config": [ 00:25:20.830 { 00:25:20.830 "method": "bdev_set_options", 00:25:20.830 "params": { 00:25:20.830 "bdev_auto_examine": true, 00:25:20.830 "bdev_io_cache_size": 256, 00:25:20.830 "bdev_io_pool_size": 65535, 00:25:20.830 "iobuf_large_cache_size": 16, 00:25:20.830 "iobuf_small_cache_size": 128 00:25:20.830 } 00:25:20.830 }, 00:25:20.830 { 00:25:20.830 "method": "bdev_raid_set_options", 00:25:20.830 "params": { 00:25:20.830 "process_max_bandwidth_mb_sec": 0, 00:25:20.830 "process_window_size_kb": 1024 00:25:20.830 } 00:25:20.830 }, 00:25:20.830 { 00:25:20.830 "method": "bdev_iscsi_set_options", 00:25:20.830 "params": { 00:25:20.830 "timeout_sec": 30 00:25:20.830 } 00:25:20.830 }, 00:25:20.830 { 00:25:20.830 "method": "bdev_nvme_set_options", 00:25:20.830 "params": { 00:25:20.830 "action_on_timeout": "none", 00:25:20.830 "allow_accel_sequence": false, 00:25:20.830 "arbitration_burst": 0, 00:25:20.830 "bdev_retry_count": 3, 00:25:20.830 "ctrlr_loss_timeout_sec": 0, 00:25:20.830 "delay_cmd_submit": true, 00:25:20.830 "dhchap_dhgroups": [ 00:25:20.830 "null", 00:25:20.830 "ffdhe2048", 00:25:20.830 "ffdhe3072", 00:25:20.830 "ffdhe4096", 00:25:20.830 "ffdhe6144", 00:25:20.830 "ffdhe8192" 00:25:20.830 ], 00:25:20.830 "dhchap_digests": [ 00:25:20.830 "sha256", 00:25:20.830 "sha384", 00:25:20.830 "sha512" 00:25:20.830 ], 00:25:20.830 
"disable_auto_failback": false, 00:25:20.830 "fast_io_fail_timeout_sec": 0, 00:25:20.830 "generate_uuids": false, 00:25:20.830 "high_priority_weight": 0, 00:25:20.830 "io_path_stat": false, 00:25:20.830 "io_queue_requests": 512, 00:25:20.830 "keep_alive_timeout_ms": 10000, 00:25:20.830 "low_priority_weight": 0, 00:25:20.830 "medium_priority_weight": 0, 00:25:20.830 "nvme_adminq_poll_period_us": 10000, 00:25:20.830 "nvme_error_stat": false, 00:25:20.830 "nvme_ioq_poll_period_us": 0, 00:25:20.830 "rdma_cm_event_timeout_ms": 0, 00:25:20.830 "rdma_max_cq_size": 0, 00:25:20.830 "rdma_srq_size": 0, 00:25:20.830 "reconnect_delay_sec": 0, 00:25:20.830 "timeout_admin_us": 0, 00:25:20.830 "timeout_us": 0, 00:25:20.830 "transport_ack_timeout": 0, 00:25:20.830 "transport_retry_count": 4, 00:25:20.830 "transport_tos": 0 00:25:20.830 } 00:25:20.830 }, 00:25:20.830 { 00:25:20.830 "method": "bdev_nvme_attach_controller", 00:25:20.830 "params": { 00:25:20.830 "adrfam": "IPv4", 00:25:20.830 "ctrlr_loss_timeout_sec": 0, 00:25:20.830 "ddgst": false, 00:25:20.830 "fast_io_fail_timeout_sec": 0, 00:25:20.830 "hdgst": false, 00:25:20.830 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:20.830 "name": "nvme0", 00:25:20.830 "prchk_guard": false, 00:25:20.830 "prchk_reftag": false, 00:25:20.830 "psk": "key0", 00:25:20.830 "reconnect_delay_sec": 0, 00:25:20.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:20.830 "traddr": "10.0.0.2", 00:25:20.830 "trsvcid": "4420", 00:25:20.830 "trtype": "TCP" 00:25:20.830 } 00:25:20.830 }, 00:25:20.830 { 00:25:20.830 "method": "bdev_nvme_set_hotplug", 00:25:20.831 "params": { 00:25:20.831 "enable": false, 00:25:20.831 "period_us": 100000 00:25:20.831 } 00:25:20.831 }, 00:25:20.831 { 00:25:20.831 "method": "bdev_enable_histogram", 00:25:20.831 "params": { 00:25:20.831 "enable": true, 00:25:20.831 "name": "nvme0n1" 00:25:20.831 } 00:25:20.831 }, 00:25:20.831 { 00:25:20.831 "method": "bdev_wait_for_examine" 00:25:20.831 } 00:25:20.831 ] 00:25:20.831 }, 00:25:20.831 { 00:25:20.831 "subsystem": "nbd", 00:25:20.831 "config": [] 00:25:20.831 } 00:25:20.831 ] 00:25:20.831 }' 00:25:20.831 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 92978 00:25:20.831 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 92978 ']' 00:25:20.831 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 92978 00:25:20.831 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:20.831 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:20.831 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92978 00:25:20.831 killing process with pid 92978 00:25:20.831 Received shutdown signal, test time was about 1.000000 seconds 00:25:20.831 00:25:20.831 Latency(us) 00:25:20.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.831 =================================================================================================================== 00:25:20.831 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:20.831 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:20.831 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:20.831 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 
'killing process with pid 92978' 00:25:20.831 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 92978 00:25:20.831 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 92978 00:25:22.205 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 92928 00:25:22.205 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 92928 ']' 00:25:22.205 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 92928 00:25:22.205 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:22.205 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:22.205 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92928 00:25:22.205 killing process with pid 92928 00:25:22.205 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:22.205 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:22.205 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92928' 00:25:22.205 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 92928 00:25:22.205 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 92928 00:25:23.581 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:25:23.581 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:23.581 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:25:23.581 "subsystems": [ 00:25:23.581 { 00:25:23.581 "subsystem": "keyring", 00:25:23.581 "config": [ 00:25:23.581 { 00:25:23.581 "method": "keyring_file_add_key", 00:25:23.581 "params": { 00:25:23.581 "name": "key0", 00:25:23.581 "path": "/tmp/tmp.Nr9H45DkV5" 00:25:23.581 } 00:25:23.581 } 00:25:23.581 ] 00:25:23.581 }, 00:25:23.581 { 00:25:23.581 "subsystem": "iobuf", 00:25:23.581 "config": [ 00:25:23.581 { 00:25:23.581 "method": "iobuf_set_options", 00:25:23.581 "params": { 00:25:23.581 "large_bufsize": 135168, 00:25:23.581 "large_pool_count": 1024, 00:25:23.581 "small_bufsize": 8192, 00:25:23.581 "small_pool_count": 8192 00:25:23.581 } 00:25:23.581 } 00:25:23.581 ] 00:25:23.581 }, 00:25:23.581 { 00:25:23.581 "subsystem": "sock", 00:25:23.581 "config": [ 00:25:23.581 { 00:25:23.581 "method": "sock_set_default_impl", 00:25:23.581 "params": { 00:25:23.581 "impl_name": "posix" 00:25:23.581 } 00:25:23.581 }, 00:25:23.581 { 00:25:23.581 "method": "sock_impl_set_options", 00:25:23.581 "params": { 00:25:23.581 "enable_ktls": false, 00:25:23.581 "enable_placement_id": 0, 00:25:23.581 "enable_quickack": false, 00:25:23.581 "enable_recv_pipe": true, 00:25:23.581 "enable_zerocopy_send_client": false, 00:25:23.581 "enable_zerocopy_send_server": true, 00:25:23.581 "impl_name": "ssl", 00:25:23.581 "recv_buf_size": 4096, 00:25:23.581 "send_buf_size": 4096, 00:25:23.581 "tls_version": 0, 00:25:23.581 "zerocopy_threshold": 0 00:25:23.581 } 00:25:23.581 }, 00:25:23.581 { 00:25:23.581 "method": "sock_impl_set_options", 00:25:23.581 "params": { 00:25:23.581 "enable_ktls": false, 00:25:23.581 "enable_placement_id": 0, 00:25:23.581 "enable_quickack": false, 00:25:23.581 
"enable_recv_pipe": true, 00:25:23.581 "enable_zerocopy_send_client": false, 00:25:23.581 "enable_zerocopy_send_server": true, 00:25:23.581 "impl_name": "posix", 00:25:23.581 "recv_buf_size": 2097152, 00:25:23.581 "send_buf_size": 2097152, 00:25:23.581 "tls_version": 0, 00:25:23.581 "zerocopy_threshold": 0 00:25:23.581 } 00:25:23.581 } 00:25:23.581 ] 00:25:23.581 }, 00:25:23.581 { 00:25:23.581 "subsystem": "vmd", 00:25:23.581 "config": [] 00:25:23.581 }, 00:25:23.581 { 00:25:23.581 "subsystem": "accel", 00:25:23.581 "config": [ 00:25:23.581 { 00:25:23.581 "method": "accel_set_options", 00:25:23.581 "params": { 00:25:23.581 "buf_count": 2048, 00:25:23.581 "large_cache_size": 16, 00:25:23.581 "sequence_count": 2048, 00:25:23.581 "small_cache_size": 128, 00:25:23.581 "task_count": 2048 00:25:23.581 } 00:25:23.581 } 00:25:23.581 ] 00:25:23.581 }, 00:25:23.581 { 00:25:23.581 "subsystem": "bdev", 00:25:23.581 "config": [ 00:25:23.581 { 00:25:23.581 "method": "bdev_set_options", 00:25:23.581 "params": { 00:25:23.581 "bdev_auto_examine": true, 00:25:23.581 "bdev_io_cache_size": 256, 00:25:23.581 "bdev_io_pool_size": 65535, 00:25:23.581 "iobuf_large_cache_size": 16, 00:25:23.581 "iobuf_small_cache_size": 128 00:25:23.581 } 00:25:23.581 }, 00:25:23.581 { 00:25:23.581 "method": "bdev_raid_set_options", 00:25:23.581 "params": { 00:25:23.581 "process_max_bandwidth_mb_sec": 0, 00:25:23.581 "process_window_size_kb": 1024 00:25:23.581 } 00:25:23.581 }, 00:25:23.581 { 00:25:23.581 "method": "bdev_iscsi_set_options", 00:25:23.581 "params": { 00:25:23.581 "timeout_sec": 30 00:25:23.581 } 00:25:23.581 }, 00:25:23.581 { 00:25:23.581 "method": "bdev_nvme_set_options", 00:25:23.581 "params": { 00:25:23.581 "action_on_timeout": "none", 00:25:23.581 "allow_accel_sequence": false, 00:25:23.581 "arbitration_burst": 0, 00:25:23.581 "bdev_retry_count": 3, 00:25:23.581 "ctrlr_loss_timeout_sec": 0, 00:25:23.581 "delay_cmd_submit": true, 00:25:23.581 "dhchap_dhgroups": [ 00:25:23.581 "null", 00:25:23.581 "ffdhe2048", 00:25:23.581 "ffdhe3072", 00:25:23.581 "ffdhe4096", 00:25:23.581 "ffdhe6144", 00:25:23.581 "ffdhe8192" 00:25:23.581 ], 00:25:23.581 "dhchap_digests": [ 00:25:23.581 "sha256", 00:25:23.581 "sha384", 00:25:23.581 "sha512" 00:25:23.581 ], 00:25:23.581 "disable_auto_failback": false, 00:25:23.581 "fast_io_fail_timeout_sec": 0, 00:25:23.581 "generate_uuids": false, 00:25:23.581 "high_priority_weight": 0, 00:25:23.581 "io_path_stat": false, 00:25:23.581 "io_queue_requests": 0, 00:25:23.581 "keep_alive_timeout_ms": 10000, 00:25:23.581 "low_priority_weight": 0, 00:25:23.581 "medium_priority_weight": 0, 00:25:23.581 "nvme_adminq_poll_period_us": 10000, 00:25:23.581 "nvme_error_stat": false, 00:25:23.581 "nvme_ioq_poll_period_us": 0, 00:25:23.581 "rdma_cm_event_timeout_ms": 0, 00:25:23.581 "rdma_max_cq_size": 0, 00:25:23.581 "rdma_srq_size": 0, 00:25:23.581 "reconnect_delay_sec": 0, 00:25:23.581 "timeout_admin_us": 0, 00:25:23.581 "timeout_us": 0, 00:25:23.581 "transport_ack_timeout": 0, 00:25:23.581 "transport_retry_count": 4, 00:25:23.581 "transport_tos": 0 00:25:23.581 } 00:25:23.581 }, 00:25:23.581 { 00:25:23.581 "method": "bdev_nvme_set_hotplug", 00:25:23.581 "params": { 00:25:23.581 "enable": false, 00:25:23.581 "period_us": 100000 00:25:23.581 } 00:25:23.581 }, 00:25:23.581 { 00:25:23.581 "method": "bdev_malloc_create", 00:25:23.581 "params": { 00:25:23.581 "block_size": 4096, 00:25:23.581 "dif_is_head_of_md": false, 00:25:23.581 "dif_pi_format": 0, 00:25:23.581 "dif_type": 0, 00:25:23.581 "md_size": 0, 
00:25:23.581 "name": "malloc0", 00:25:23.581 "num_blocks": 8192, 00:25:23.581 "optimal_io_boundary": 0, 00:25:23.581 "physical_block_size": 4096, 00:25:23.581 "uuid": "14b640f2-6ba0-4838-b459-85a8e43273ae" 00:25:23.581 } 00:25:23.581 }, 00:25:23.581 { 00:25:23.581 "method": "bdev_wait_for_examine" 00:25:23.581 } 00:25:23.581 ] 00:25:23.581 }, 00:25:23.581 { 00:25:23.581 "subsystem": "nbd", 00:25:23.581 "config": [] 00:25:23.581 }, 00:25:23.581 { 00:25:23.581 "subsystem": "scheduler", 00:25:23.581 "config": [ 00:25:23.581 { 00:25:23.581 "method": "framework_set_scheduler", 00:25:23.581 "params": { 00:25:23.581 "name": "static" 00:25:23.581 } 00:25:23.581 } 00:25:23.581 ] 00:25:23.581 }, 00:25:23.581 { 00:25:23.581 "subsystem": "nvmf", 00:25:23.581 "config": [ 00:25:23.581 { 00:25:23.581 "method": "nvmf_set_config", 00:25:23.581 "params": { 00:25:23.581 "admin_cmd_passthru": { 00:25:23.581 "identify_ctrlr": false 00:25:23.581 }, 00:25:23.581 "discovery_filter": "match_any" 00:25:23.581 } 00:25:23.581 }, 00:25:23.581 { 00:25:23.581 "method": "nvmf_set_max_subsystems", 00:25:23.581 "params": { 00:25:23.581 "max_subsystems": 1024 00:25:23.581 } 00:25:23.582 }, 00:25:23.582 { 00:25:23.582 "method": "nvmf_set_crdt", 00:25:23.582 "params": { 00:25:23.582 "crdt1": 0, 00:25:23.582 "crdt2": 0, 00:25:23.582 "crdt3": 0 00:25:23.582 } 00:25:23.582 }, 00:25:23.582 { 00:25:23.582 "method": "nvmf_create_transport", 00:25:23.582 "params": { 00:25:23.582 "abort_timeout_sec": 1, 00:25:23.582 "ack_timeout": 0, 00:25:23.582 "buf_cache_size": 4294967295, 00:25:23.582 "c2h_success": false, 00:25:23.582 "data_wr_pool_size": 0, 00:25:23.582 "dif_insert_or_strip": false, 00:25:23.582 "in_capsule_data_size": 4096, 00:25:23.582 "io_unit_size": 131072, 00:25:23.582 "max_aq_depth": 128, 00:25:23.582 "max_io_qpairs_per_ctrlr": 127, 00:25:23.582 "max_io_size": 131072, 00:25:23.582 "max_queue_depth": 128, 00:25:23.582 "num_shared_buffers": 511, 00:25:23.582 "sock_priority": 0, 00:25:23.582 "trtype": "TCP", 00:25:23.582 "zcopy": false 00:25:23.582 } 00:25:23.582 }, 00:25:23.582 { 00:25:23.582 "method": "nvmf_create_subsystem", 00:25:23.582 "params": { 00:25:23.582 "allow_any_host": false, 00:25:23.582 "ana_reporting": false, 00:25:23.582 "max_cntlid": 65519, 00:25:23.582 "max_namespaces": 32, 00:25:23.582 "min_cntlid": 1, 00:25:23.582 "model_number": "SPDK bdev Controller", 00:25:23.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.582 "serial_number": "00000000000000000000" 00:25:23.582 } 00:25:23.582 }, 00:25:23.582 { 00:25:23.582 "method": "nvmf_subsystem_add_host", 00:25:23.582 "params": { 00:25:23.582 "host": "nqn.2016-06.io.spdk:host1", 00:25:23.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.582 "psk": "key0" 00:25:23.582 } 00:25:23.582 }, 00:25:23.582 { 00:25:23.582 "method": "nvmf_subsystem_add_ns", 00:25:23.582 "params": { 00:25:23.582 "namespace": { 00:25:23.582 "bdev_name": "malloc0", 00:25:23.582 "nguid": "14B640F26BA04838B45985A8E43273AE", 00:25:23.582 "no_auto_visible": false, 00:25:23.582 "nsid": 1, 00:25:23.582 "uuid": "14b640f2-6ba0-4838-b459-85a8e43273ae" 00:25:23.582 }, 00:25:23.582 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:25:23.582 } 00:25:23.582 }, 00:25:23.582 { 00:25:23.582 "method": "nvmf_subsystem_add_listener", 00:25:23.582 "params": { 00:25:23.582 "listen_address": { 00:25:23.582 "adrfam": "IPv4", 00:25:23.582 "traddr": "10.0.0.2", 00:25:23.582 "trsvcid": "4420", 00:25:23.582 "trtype": "TCP" 00:25:23.582 }, 00:25:23.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.582 "secure_channel": false, 
00:25:23.582 "sock_impl": "ssl" 00:25:23.582 } 00:25:23.582 } 00:25:23.582 ] 00:25:23.582 } 00:25:23.582 ] 00:25:23.582 }' 00:25:23.582 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:23.582 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:23.582 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=93093 00:25:23.582 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 93093 00:25:23.582 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 93093 ']' 00:25:23.582 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.582 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:23.582 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:23.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.582 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:23.582 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:23.582 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:23.841 [2024-07-22 18:32:35.631515] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:23.841 [2024-07-22 18:32:35.631732] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:23.841 [2024-07-22 18:32:35.814854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.099 [2024-07-22 18:32:36.087633] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:24.099 [2024-07-22 18:32:36.087749] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:24.099 [2024-07-22 18:32:36.087768] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:24.099 [2024-07-22 18:32:36.087784] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:24.099 [2024-07-22 18:32:36.087796] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:24.099 [2024-07-22 18:32:36.087976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.666 [2024-07-22 18:32:36.620678] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.666 [2024-07-22 18:32:36.663959] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:24.666 [2024-07-22 18:32:36.664319] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:24.924 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:24.924 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:24.924 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:24.924 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:24.924 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:24.924 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:24.924 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=93137 00:25:24.924 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 93137 /var/tmp/bdevperf.sock 00:25:24.924 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 93137 ']' 00:25:24.924 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:24.924 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:24.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:24.924 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:25:24.924 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:24.924 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:24.924 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:24.924 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:25:24.924 "subsystems": [ 00:25:24.924 { 00:25:24.924 "subsystem": "keyring", 00:25:24.924 "config": [ 00:25:24.924 { 00:25:24.924 "method": "keyring_file_add_key", 00:25:24.924 "params": { 00:25:24.924 "name": "key0", 00:25:24.924 "path": "/tmp/tmp.Nr9H45DkV5" 00:25:24.924 } 00:25:24.924 } 00:25:24.924 ] 00:25:24.924 }, 00:25:24.924 { 00:25:24.924 "subsystem": "iobuf", 00:25:24.924 "config": [ 00:25:24.924 { 00:25:24.924 "method": "iobuf_set_options", 00:25:24.924 "params": { 00:25:24.924 "large_bufsize": 135168, 00:25:24.924 "large_pool_count": 1024, 00:25:24.924 "small_bufsize": 8192, 00:25:24.924 "small_pool_count": 8192 00:25:24.924 } 00:25:24.924 } 00:25:24.924 ] 00:25:24.924 }, 00:25:24.924 { 00:25:24.924 "subsystem": "sock", 00:25:24.924 "config": [ 00:25:24.924 { 00:25:24.924 "method": "sock_set_default_impl", 00:25:24.924 "params": { 00:25:24.924 "impl_name": "posix" 00:25:24.924 } 00:25:24.924 }, 00:25:24.924 { 00:25:24.924 "method": "sock_impl_set_options", 00:25:24.924 "params": { 00:25:24.924 "enable_ktls": false, 00:25:24.924 "enable_placement_id": 0, 00:25:24.924 "enable_quickack": false, 00:25:24.924 "enable_recv_pipe": true, 00:25:24.924 "enable_zerocopy_send_client": false, 00:25:24.924 "enable_zerocopy_send_server": true, 00:25:24.925 "impl_name": "ssl", 00:25:24.925 "recv_buf_size": 4096, 00:25:24.925 "send_buf_size": 4096, 00:25:24.925 "tls_version": 0, 00:25:24.925 "zerocopy_threshold": 0 00:25:24.925 } 00:25:24.925 }, 00:25:24.925 { 00:25:24.925 "method": "sock_impl_set_options", 00:25:24.925 "params": { 00:25:24.925 "enable_ktls": false, 00:25:24.925 "enable_placement_id": 0, 00:25:24.925 "enable_quickack": false, 00:25:24.925 "enable_recv_pipe": true, 00:25:24.925 "enable_zerocopy_send_client": false, 00:25:24.925 "enable_zerocopy_send_server": true, 00:25:24.925 "impl_name": "posix", 00:25:24.925 "recv_buf_size": 2097152, 00:25:24.925 "send_buf_size": 2097152, 00:25:24.925 "tls_version": 0, 00:25:24.925 "zerocopy_threshold": 0 00:25:24.925 } 00:25:24.925 } 00:25:24.925 ] 00:25:24.925 }, 00:25:24.925 { 00:25:24.925 "subsystem": "vmd", 00:25:24.925 "config": [] 00:25:24.925 }, 00:25:24.925 { 00:25:24.925 "subsystem": "accel", 00:25:24.925 "config": [ 00:25:24.925 { 00:25:24.925 "method": "accel_set_options", 00:25:24.925 "params": { 00:25:24.925 "buf_count": 2048, 00:25:24.925 "large_cache_size": 16, 00:25:24.925 "sequence_count": 2048, 00:25:24.925 "small_cache_size": 128, 00:25:24.925 "task_count": 2048 00:25:24.925 } 00:25:24.925 } 00:25:24.925 ] 00:25:24.925 }, 00:25:24.925 { 00:25:24.925 "subsystem": "bdev", 00:25:24.925 "config": [ 00:25:24.925 { 00:25:24.925 "method": "bdev_set_options", 00:25:24.925 "params": { 00:25:24.925 "bdev_auto_examine": true, 00:25:24.925 "bdev_io_cache_size": 256, 00:25:24.925 "bdev_io_pool_size": 65535, 00:25:24.925 "iobuf_large_cache_size": 16, 00:25:24.925 "iobuf_small_cache_size": 128 00:25:24.925 } 00:25:24.925 }, 00:25:24.925 { 00:25:24.925 "method": "bdev_raid_set_options", 00:25:24.925 "params": { 00:25:24.925 
"process_max_bandwidth_mb_sec": 0, 00:25:24.925 "process_window_size_kb": 1024 00:25:24.925 } 00:25:24.925 }, 00:25:24.925 { 00:25:24.925 "method": "bdev_iscsi_set_options", 00:25:24.925 "params": { 00:25:24.925 "timeout_sec": 30 00:25:24.925 } 00:25:24.925 }, 00:25:24.925 { 00:25:24.925 "method": "bdev_nvme_set_options", 00:25:24.925 "params": { 00:25:24.925 "action_on_timeout": "none", 00:25:24.925 "allow_accel_sequence": false, 00:25:24.925 "arbitration_burst": 0, 00:25:24.925 "bdev_retry_count": 3, 00:25:24.925 "ctrlr_loss_timeout_sec": 0, 00:25:24.925 "delay_cmd_submit": true, 00:25:24.925 "dhchap_dhgroups": [ 00:25:24.925 "null", 00:25:24.925 "ffdhe2048", 00:25:24.925 "ffdhe3072", 00:25:24.925 "ffdhe4096", 00:25:24.925 "ffdhe6144", 00:25:24.925 "ffdhe8192" 00:25:24.925 ], 00:25:24.925 "dhchap_digests": [ 00:25:24.925 "sha256", 00:25:24.925 "sha384", 00:25:24.925 "sha512" 00:25:24.925 ], 00:25:24.925 "disable_auto_failback": false, 00:25:24.925 "fast_io_fail_timeout_sec": 0, 00:25:24.925 "generate_uuids": false, 00:25:24.925 "high_priority_weight": 0, 00:25:24.925 "io_path_stat": false, 00:25:24.925 "io_queue_requests": 512, 00:25:24.925 "keep_alive_timeout_ms": 10000, 00:25:24.925 "low_priority_weight": 0, 00:25:24.925 "medium_priority_weight": 0, 00:25:24.925 "nvme_adminq_poll_period_us": 10000, 00:25:24.925 "nvme_error_stat": false, 00:25:24.925 "nvme_ioq_poll_period_us": 0, 00:25:24.925 "rdma_cm_event_timeout_ms": 0, 00:25:24.925 "rdma_max_cq_size": 0, 00:25:24.925 "rdma_srq_size": 0, 00:25:24.925 "reconnect_delay_sec": 0, 00:25:24.925 "timeout_admin_us": 0, 00:25:24.925 "timeout_us": 0, 00:25:24.925 "transport_ack_timeout": 0, 00:25:24.925 "transport_retry_count": 4, 00:25:24.925 "transport_tos": 0 00:25:24.925 } 00:25:24.925 }, 00:25:24.925 { 00:25:24.925 "method": "bdev_nvme_attach_controller", 00:25:24.925 "params": { 00:25:24.925 "adrfam": "IPv4", 00:25:24.925 "ctrlr_loss_timeout_sec": 0, 00:25:24.925 "ddgst": false, 00:25:24.925 "fast_io_fail_timeout_sec": 0, 00:25:24.925 "hdgst": false, 00:25:24.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:24.925 "name": "nvme0", 00:25:24.925 "prchk_guard": false, 00:25:24.925 "prchk_reftag": false, 00:25:24.925 "psk": "key0", 00:25:24.925 "reconnect_delay_sec": 0, 00:25:24.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.925 "traddr": "10.0.0.2", 00:25:24.925 "trsvcid": "4420", 00:25:24.925 "trtype": "TCP" 00:25:24.925 } 00:25:24.925 }, 00:25:24.925 { 00:25:24.925 "method": "bdev_nvme_set_hotplug", 00:25:24.925 "params": { 00:25:24.925 "enable": false, 00:25:24.925 "period_us": 100000 00:25:24.925 } 00:25:24.925 }, 00:25:24.925 { 00:25:24.925 "method": "bdev_enable_histogram", 00:25:24.925 "params": { 00:25:24.925 "enable": true, 00:25:24.925 "name": "nvme0n1" 00:25:24.925 } 00:25:24.925 }, 00:25:24.925 { 00:25:24.925 "method": "bdev_wait_for_examine" 00:25:24.925 } 00:25:24.925 ] 00:25:24.925 }, 00:25:24.925 { 00:25:24.925 "subsystem": "nbd", 00:25:24.925 "config": [] 00:25:24.925 } 00:25:24.925 ] 00:25:24.925 }' 00:25:24.925 [2024-07-22 18:32:36.846635] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:25:24.925 [2024-07-22 18:32:36.846842] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93137 ] 00:25:25.183 [2024-07-22 18:32:37.031754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.441 [2024-07-22 18:32:37.336375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.007 [2024-07-22 18:32:37.765465] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:26.007 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:26.007 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:26.007 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:26.007 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:25:26.265 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.265 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:26.523 Running I/O for 1 seconds... 00:25:27.458 00:25:27.458 Latency(us) 00:25:27.458 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.458 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:27.458 Verification LBA range: start 0x0 length 0x2000 00:25:27.458 nvme0n1 : 1.03 2647.88 10.34 0.00 0.00 47400.93 7923.90 28359.21 00:25:27.458 =================================================================================================================== 00:25:27.458 Total : 2647.88 10.34 0.00 0.00 47400.93 7923.90 28359.21 00:25:27.458 0 00:25:27.458 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:25:27.458 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:25:27.458 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:25:27.458 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:25:27.458 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:25:27.458 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:25:27.458 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:27.458 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:25:27.458 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:25:27.458 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:25:27.458 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:27.458 nvmf_trace.0 00:25:27.717 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:25:27.717 18:32:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 93137 00:25:27.717 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 93137 ']' 00:25:27.717 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 93137 00:25:27.717 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:27.717 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:27.717 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93137 00:25:27.717 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:27.717 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:27.717 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93137' 00:25:27.717 killing process with pid 93137 00:25:27.717 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 93137 00:25:27.717 Received shutdown signal, test time was about 1.000000 seconds 00:25:27.717 00:25:27.717 Latency(us) 00:25:27.717 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.717 =================================================================================================================== 00:25:27.717 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:27.717 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 93137 00:25:29.096 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:29.096 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:29.096 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:25:29.096 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:29.096 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:25:29.096 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:29.096 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:29.096 rmmod nvme_tcp 00:25:29.096 rmmod nvme_fabrics 00:25:29.096 rmmod nvme_keyring 00:25:29.096 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:29.096 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:25:29.096 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:25:29.096 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 93093 ']' 00:25:29.096 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 93093 00:25:29.096 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 93093 ']' 00:25:29.096 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 93093 00:25:29.096 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:29.096 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:29.096 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93093 00:25:29.096 18:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:29.096 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:29.096 killing process with pid 93093 00:25:29.096 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93093' 00:25:29.096 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 93093 00:25:29.096 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 93093 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.yhXg0mJ7O5 /tmp/tmp.7KQUfF6asL /tmp/tmp.Nr9H45DkV5 00:25:30.482 00:25:30.482 real 1m51.564s 00:25:30.482 user 2m57.689s 00:25:30.482 sys 0m29.430s 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:30.482 ************************************ 00:25:30.482 END TEST nvmf_tls 00:25:30.482 ************************************ 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:30.482 ************************************ 00:25:30.482 START TEST nvmf_fips 00:25:30.482 ************************************ 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:30.482 * Looking for test storage... 
00:25:30.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:30.482 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:30.741 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:25:30.741 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:25:30.741 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:30.741 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:30.741 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:30.741 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:30.741 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:30.741 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.741 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.741 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.741 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.741 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.741 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.741 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:30.741 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.741 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 
00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:25:30.742 Error setting digest 00:25:30.742 0062118C517F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:25:30.742 0062118C517F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:30.742 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:30.743 Cannot find device "nvmf_tgt_br" 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # true 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:30.743 Cannot find device "nvmf_tgt_br2" 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # true 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:30.743 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:31.001 Cannot find device "nvmf_tgt_br" 00:25:31.001 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # true 00:25:31.001 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:31.001 Cannot find device "nvmf_tgt_br2" 00:25:31.001 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # true 00:25:31.001 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:31.001 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:31.001 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:31.001 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:31.001 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:25:31.001 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:31.001 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:31.001 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:25:31.001 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:31.001 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:31.001 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:31.001 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:31.001 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:31.001 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:31.001 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@178 -- # ip 
addr add 10.0.0.1/24 dev nvmf_init_if 00:25:31.001 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:31.001 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:31.001 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:31.001 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:31.001 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:31.001 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:31.001 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:31.001 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:31.001 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:31.001 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:31.002 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:31.002 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:31.002 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:31.002 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:31.002 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:31.002 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:31.002 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:31.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:31.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:25:31.002 00:25:31.002 --- 10.0.0.2 ping statistics --- 00:25:31.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.002 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:25:31.002 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:31.002 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:31.002 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:25:31.002 00:25:31.002 --- 10.0.0.3 ping statistics --- 00:25:31.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.002 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:25:31.002 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:31.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:31.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:25:31.002 00:25:31.002 --- 10.0.0.1 ping statistics --- 00:25:31.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.002 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:25:31.002 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:31.002 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:25:31.002 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:31.002 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:31.002 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:31.002 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:31.002 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:31.002 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:31.002 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:31.260 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:25:31.260 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:31.260 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:31.260 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:31.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:31.260 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=93454 00:25:31.260 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:31.260 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 93454 00:25:31.260 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 93454 ']' 00:25:31.260 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.260 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:31.260 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:31.260 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:31.260 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:31.260 [2024-07-22 18:32:43.186352] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:25:31.260 [2024-07-22 18:32:43.186541] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:31.518 [2024-07-22 18:32:43.358320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.806 [2024-07-22 18:32:43.656109] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:31.806 [2024-07-22 18:32:43.656207] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:31.806 [2024-07-22 18:32:43.656225] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:31.806 [2024-07-22 18:32:43.656254] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:31.806 [2024-07-22 18:32:43.656265] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:31.806 [2024-07-22 18:32:43.656337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.066 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:32.066 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:25:32.066 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:32.066 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:32.066 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:32.325 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:32.325 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:25:32.325 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:32.325 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:25:32.325 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:32.325 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:25:32.325 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:25:32.325 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:25:32.325 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:32.584 [2024-07-22 18:32:44.378111] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:32.584 [2024-07-22 18:32:44.393996] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:32.584 [2024-07-22 18:32:44.394275] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:32.584 [2024-07-22 18:32:44.463996] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 
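Note: at this point the target side of the TLS test is configured: the PSK in NVMe/TCP interchange format was written to key.txt with 0600 permissions and handed to the running nvmf_tgt over JSON-RPC, which is what produced the "TLS support is considered experimental" and PSK-path deprecation notices above. A sketch of an equivalent manual setup against the target's default RPC socket; the exact RPCs issued by fips.sh's setup_nvmf_tgt_conf are not traced here, so the malloc bdev size and the add_host --psk form are assumptions inferred from the surrounding output, while the NQNs, address and port match what the initiator uses below:

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  echo -n "$key" > key.txt        # no trailing newline in the key file
  chmod 0600 key.txt              # PSK must not be world readable

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumes the nvmf_tgt started above is running

  $rpc nvmf_create_transport -t tcp
  $rpc bdev_malloc_create -b malloc0 64 512         # size arguments illustrative; trace only shows the name
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Allow host1 to connect only with this PSK; the path-based form is the
  # deprecated interface flagged in the warning above (assumed syntax).
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key.txt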
00:25:32.584 malloc0 00:25:32.584 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:32.584 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=93507 00:25:32.584 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 93507 /var/tmp/bdevperf.sock 00:25:32.584 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:32.584 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 93507 ']' 00:25:32.584 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:32.584 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:32.584 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:32.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:32.584 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:32.584 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:32.844 [2024-07-22 18:32:44.659661] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:32.844 [2024-07-22 18:32:44.660369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93507 ] 00:25:32.844 [2024-07-22 18:32:44.830456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.102 [2024-07-22 18:32:45.109644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:33.668 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:33.668 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:25:33.668 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:25:33.926 [2024-07-22 18:32:45.707699] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:33.926 [2024-07-22 18:32:45.707905] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:33.926 TLSTESTn1 00:25:33.926 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:33.926 Running I/O for 10 seconds... 
00:25:46.126 00:25:46.126 Latency(us) 00:25:46.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.126 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:46.126 Verification LBA range: start 0x0 length 0x2000 00:25:46.126 TLSTESTn1 : 10.02 2798.52 10.93 0.00 0.00 45643.18 8579.26 32648.84 00:25:46.126 =================================================================================================================== 00:25:46.126 Total : 2798.52 10.93 0.00 0.00 45643.18 8579.26 32648.84 00:25:46.126 0 00:25:46.126 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:46.126 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:46.126 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:25:46.126 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:25:46.126 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:25:46.126 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:46.126 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:25:46.126 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:25:46.126 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:25:46.126 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:46.126 nvmf_trace.0 00:25:46.126 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:25:46.126 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 93507 00:25:46.126 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 93507 ']' 00:25:46.126 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 93507 00:25:46.126 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:25:46.126 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:46.126 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93507 00:25:46.126 killing process with pid 93507 00:25:46.126 Received shutdown signal, test time was about 10.000000 seconds 00:25:46.126 00:25:46.126 Latency(us) 00:25:46.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.126 =================================================================================================================== 00:25:46.126 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:46.126 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:46.126 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:46.126 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93507' 00:25:46.126 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 93507 00:25:46.126 [2024-07-22 18:32:56.097633] 
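Note: the TLSTESTn1 results above come from bdevperf acting as the initiator. It is started idle (-z) behind its own RPC socket, a TLS-protected NVMe/TCP controller is attached with the same PSK file, and the queued 4 KiB verify workload then runs for 10 seconds, giving roughly 2.8k IOPS / 10.9 MiB/s over the veth pair before the process is torn down. Consolidated from the trace above (the harness waits for each RPC socket with waitforlisten; that step is omitted here):

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock

  # Idle bdevperf on core mask 0x4: queue depth 128, 4 KiB I/O,
  # verify workload, 10 second run, controlled over its own RPC socket.
  $bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &

  # Attach the NVMe/TCP controller over TLS using the interchange-format PSK.
  $rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt

  # Start the queued workload; this is what produced the table above.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests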
app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:46.126 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 93507 00:25:46.126 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:46.126 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:46.126 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:25:46.126 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:46.126 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:25:46.126 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:46.126 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:46.126 rmmod nvme_tcp 00:25:46.126 rmmod nvme_fabrics 00:25:46.126 rmmod nvme_keyring 00:25:46.126 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:46.126 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:25:46.126 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:25:46.126 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 93454 ']' 00:25:46.126 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 93454 00:25:46.126 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 93454 ']' 00:25:46.126 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 93454 00:25:46.126 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:25:46.126 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:46.126 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93454 00:25:46.126 killing process with pid 93454 00:25:46.126 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:46.126 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:46.126 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93454' 00:25:46.126 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 93454 00:25:46.126 [2024-07-22 18:32:57.663127] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:46.126 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 93454 00:25:47.497 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:47.497 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:47.497 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:47.497 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:47.497 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:47.497 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.497 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.497 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.497 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:47.497 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:25:47.497 00:25:47.497 real 0m16.737s 00:25:47.497 user 0m23.686s 00:25:47.497 sys 0m5.527s 00:25:47.497 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:47.497 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:47.497 ************************************ 00:25:47.497 END TEST nvmf_fips 00:25:47.497 ************************************ 00:25:47.497 18:32:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:25:47.497 18:32:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:25:47.497 18:32:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:47.498 ************************************ 00:25:47.498 START TEST nvmf_fuzz 00:25:47.498 ************************************ 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:47.498 * Looking for test storage... 
00:25:47.498 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 
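Note: nvmftestinit is about to rebuild the same all-software test network that the FIPS test used: a network namespace for the target, veth pairs whose host-side ends hang off a bridge, addresses 10.0.0.1 (initiator) and 10.0.0.2/10.0.0.3 (target), and an iptables rule admitting NVMe/TCP traffic on port 4420. The teardown and setup commands are traced over the next lines; a minimal standalone sketch of the topology, run as root and using the trace's interface names (the second target interface, nvmf_tgt_if2, is omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk                              # target namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move the target end into the namespace

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge                            # join the host-side veth ends
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  ping -c 1 10.0.0.2    # initiator to target, the same smoke test the trace runs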
00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:47.498 Cannot find device "nvmf_tgt_br" 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # true 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:47.498 Cannot find device "nvmf_tgt_br2" 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # true 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:47.498 Cannot find device "nvmf_tgt_br" 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # true 
00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:47.498 Cannot find device "nvmf_tgt_br2" 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # true 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:47.498 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:47.498 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:47.498 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:47.499 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:47.756 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:47.756 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:47.756 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:47.756 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:47.756 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:47.756 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:47.756 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:47.756 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:47.756 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:47.756 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:47.756 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:47.757 18:32:59 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:47.757 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:47.757 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:47.757 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:47.757 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:47.757 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:47.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:47.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:25:47.757 00:25:47.757 --- 10.0.0.2 ping statistics --- 00:25:47.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.757 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:25:47.757 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:47.757 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:47.757 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:25:47.757 00:25:47.757 --- 10.0.0.3 ping statistics --- 00:25:47.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.757 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:25:47.757 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:47.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:47.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:25:47.757 00:25:47.757 --- 10.0.0.1 ping statistics --- 00:25:47.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.757 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:25:47.757 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:47.757 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@433 -- # return 0 00:25:47.757 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:47.757 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:47.757 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:47.757 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:47.757 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:47.757 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:47.757 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:47.757 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=93878 00:25:47.757 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:47.757 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:47.757 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # 
waitforlisten 93878 00:25:47.757 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 93878 ']' 00:25:47.757 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:47.757 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:47.757 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.757 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:47.757 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:49.131 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:49.131 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:25:49.131 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:49.131 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.131 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:49.131 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.131 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:49.131 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.132 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:49.132 Malloc0 00:25:49.132 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.132 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:49.132 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.132 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:49.132 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.132 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:49.132 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.132 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:49.132 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.132 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:49.132 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.132 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:49.132 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
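Note: the rpc_cmd calls above are thin wrappers around scripts/rpc.py talking to the nvmf_tgt started for the fuzz test; together they provision a disposable subsystem for the fuzzer: a TCP transport with 8 KiB in-capsule data, a 64 MB malloc bdev exported as a namespace, and a listener on 10.0.0.2:4420. Consolidated into the equivalent direct RPC calls, plus the randomized fuzz run the trace launches next (30 seconds, seed 123456, followed later by a replay of example.json):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # what rpc_cmd expands to here

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create -b Malloc0 64 512
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Point the fuzzer at that listener; -t is the run time in seconds and
  # -S seeds the random generator so failures are reproducible.
  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a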
00:25:49.132 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:49.132 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:50.065 Shutting down the fuzz application 00:25:50.065 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:50.998 Shutting down the fuzz application 00:25:50.998 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:50.998 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.998 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:50.998 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.998 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:50.998 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:50.998 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:50.998 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:25:50.998 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:50.998 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:25:50.998 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:50.998 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:50.998 rmmod nvme_tcp 00:25:50.998 rmmod nvme_fabrics 00:25:50.998 rmmod nvme_keyring 00:25:50.998 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:50.998 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:25:50.998 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:25:50.998 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 93878 ']' 00:25:50.998 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 93878 00:25:50.999 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 93878 ']' 00:25:50.999 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 93878 00:25:50.999 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:25:50.999 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:50.999 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93878 00:25:51.257 killing process with pid 93878 00:25:51.257 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:51.257 18:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:51.257 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93878' 00:25:51.257 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 93878 00:25:51.257 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 93878 00:25:52.635 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:52.635 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:52.635 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:52.635 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:52.635 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:52.635 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.635 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:52.635 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.635 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:52.635 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:25:52.635 00:25:52.635 real 0m5.333s 00:25:52.635 user 0m6.426s 00:25:52.635 sys 0m1.025s 00:25:52.635 ************************************ 00:25:52.636 END TEST nvmf_fuzz 00:25:52.636 ************************************ 00:25:52.636 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:52.636 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:52.636 18:33:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:25:52.636 18:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:52.636 18:33:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:52.636 18:33:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:52.636 18:33:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:52.636 ************************************ 00:25:52.636 START TEST nvmf_multiconnection 00:25:52.636 ************************************ 00:25:52.636 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:52.895 * Looking for test storage... 
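Before the multiconnection run starts, a quick recap of the fuzz stage that just finished: nvme_fuzz was pointed at the cnode1 listener twice, once as a timed run (-t 30) with a fixed seed (-S 123456) and once replaying the commands described in example.json (-j). Condensed from the trace above, with the trid string exactly as fabrics_fuzz.sh built it:

  fuzz=/home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz
  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  $fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a      # 30-second seeded run, flags as traced above
  $fuzz -m 0x2 -F "$trid" -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a   # JSON-driven run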
00:25:52.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:52.895 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:52.895 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:52.895 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:52.895 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:52.895 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:52.895 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:52.895 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:52.895 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:52.895 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:52.895 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:52.895 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:52.895 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:52.895 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:25:52.895 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:25:52.895 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:52.895 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:52.895 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:52.895 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:52.895 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:52.895 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:52.895 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:52.895 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:52.895 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.895 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.895 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:52.896 18:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:52.896 Cannot find device "nvmf_tgt_br" 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # true 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:52.896 Cannot find device "nvmf_tgt_br2" 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # true 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:52.896 Cannot find device "nvmf_tgt_br" 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # true 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:52.896 Cannot find device "nvmf_tgt_br2" 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # true 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:52.896 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:52.896 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:52.896 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:53.156 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:53.156 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:53.156 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:53.156 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:53.156 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:53.156 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:53.156 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:53.156 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:53.156 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:53.156 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:53.156 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:53.156 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:53.156 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:53.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:53.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:25:53.156 00:25:53.156 --- 10.0.0.2 ping statistics --- 00:25:53.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.156 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:53.156 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:53.156 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:25:53.156 00:25:53.156 --- 10.0.0.3 ping statistics --- 00:25:53.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.156 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:53.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:53.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:25:53.156 00:25:53.156 --- 10.0.0.1 ping statistics --- 00:25:53.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.156 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@433 -- # return 0 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=94142 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 94142 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 94142 ']' 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
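The nvmf_veth_init sequence traced above from nvmf/common.sh builds the same two-namespace topology every nvmf TCP test here relies on: the target lives inside the nvmf_tgt_ns_spdk namespace and owns 10.0.0.2 and 10.0.0.3, the initiator stays in the root namespace on 10.0.0.1, and all veth peers are enslaved to the nvmf_br bridge, with iptables rules admitting TCP/4420 and bridge forwarding. Condensed into plain shell (names and addresses copied from the trace; the per-interface `ip link set ... up` calls are folded into a comment):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # first target port
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # second target port
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge              # the trace also brings every interface up with `ip link set ... up`
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                          # host -> target addresses
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                 # target namespace -> host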
00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:53.156 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.416 [2024-07-22 18:33:05.185694] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:53.416 [2024-07-22 18:33:05.185934] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.416 [2024-07-22 18:33:05.373823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:53.675 [2024-07-22 18:33:05.677616] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:53.675 [2024-07-22 18:33:05.678200] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:53.675 [2024-07-22 18:33:05.678389] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:53.675 [2024-07-22 18:33:05.678622] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:53.675 [2024-07-22 18:33:05.678753] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:53.675 [2024-07-22 18:33:05.679141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.675 [2024-07-22 18:33:05.679353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:53.675 [2024-07-22 18:33:05.679341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.675 [2024-07-22 18:33:05.679253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:54.241 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:54.241 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:25:54.241 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:54.241 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:54.241 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.241 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:54.241 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:54.241 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.241 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.241 [2024-07-22 18:33:06.188472] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.241 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.241 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:54.241 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.241 18:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:54.241 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.241 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.500 Malloc1 00:25:54.500 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.500 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:54.500 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.500 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.500 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.500 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.501 [2024-07-22 18:33:06.317813] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.501 Malloc2 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.501 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.760 Malloc3 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd 
bdev_malloc_create 64 512 -b Malloc4 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.760 Malloc4 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.760 Malloc5 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:54.760 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.760 18:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.019 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.019 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:55.019 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.019 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.019 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.019 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.019 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:55.019 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.019 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.019 Malloc6 00:25:55.019 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.019 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:55.019 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.019 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.019 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.019 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:55.019 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.019 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.019 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.019 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:55.019 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.019 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.019 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.019 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.019 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:55.019 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.019 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:55.019 Malloc7 00:25:55.019 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.019 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:55.019 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.019 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.019 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.019 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:55.019 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.019 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.019 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.019 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:55.019 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.019 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.019 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.019 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.019 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:55.019 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.019 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.277 Malloc8 00:25:55.277 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.277 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:55.277 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.277 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.277 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.278 
18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.278 Malloc9 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.278 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.537 Malloc10 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.537 18:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.537 Malloc11 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:55.537 
18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.537 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:55.796 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:55.796 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:55.796 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:55.796 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:55.796 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:57.702 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:57.702 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:57.702 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:25:57.702 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:57.702 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:57.702 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:57.702 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.702 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:57.962 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:57.962 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:57.962 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:57.962 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:57.962 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:00.497 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:00.497 18:33:11 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:00.497 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:26:00.497 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:00.497 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:00.497 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:00.497 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:00.497 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:00.497 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:00.497 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:00.497 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:00.497 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:00.497 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:02.403 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:02.403 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:02.403 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:26:02.403 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:02.404 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:02.404 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:02.404 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:02.404 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:02.404 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:02.404 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:02.404 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:02.404 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:02.404 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 
00:26:04.306 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:04.306 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:04.306 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:26:04.307 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:04.307 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:04.307 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:04.307 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:04.307 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:04.565 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:04.566 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:04.566 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:04.566 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:04.566 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:07.098 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:07.098 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:07.098 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:26:07.098 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:07.098 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:07.098 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:07.098 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.098 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:07.098 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:07.098 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:07.098 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:07.098 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:07.098 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:09.002 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:09.002 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:09.002 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:26:09.002 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:09.002 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:09.002 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:09.002 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:09.002 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:09.002 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:09.002 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:09.002 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:09.002 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:09.002 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:10.903 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:10.903 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:10.903 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:26:10.903 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:10.903 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:10.903 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:10.903 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:10.903 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:11.161 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:11.161 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:11.161 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:11.161 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:11.161 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:13.696 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:13.696 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:13.696 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:26:13.696 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:13.696 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:13.696 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:13.696 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:13.696 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:13.696 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:13.696 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:13.696 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:13.696 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:13.696 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:15.599 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:15.599 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:15.599 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:26:15.599 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:15.599 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:15.599 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:15.599 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:15.599 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:15.599 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:15.600 18:33:27 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:15.600 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:15.600 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:15.600 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:17.501 18:33:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:17.501 18:33:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:26:17.501 18:33:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:17.759 18:33:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:17.759 18:33:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:17.759 18:33:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:17.759 18:33:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.759 18:33:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:17.759 18:33:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:17.759 18:33:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:17.759 18:33:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:17.759 18:33:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:17.759 18:33:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:20.287 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:20.287 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:20.287 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:26:20.287 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:20.287 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:20.287 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:20.287 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:20.287 [global] 00:26:20.287 thread=1 00:26:20.287 invalidate=1 00:26:20.287 rw=read 00:26:20.287 time_based=1 00:26:20.287 runtime=10 00:26:20.287 ioengine=libaio 00:26:20.287 direct=1 00:26:20.287 bs=262144 00:26:20.287 iodepth=64 
00:26:20.287 norandommap=1 00:26:20.287 numjobs=1 00:26:20.287 00:26:20.287 [job0] 00:26:20.287 filename=/dev/nvme0n1 00:26:20.287 [job1] 00:26:20.287 filename=/dev/nvme10n1 00:26:20.287 [job2] 00:26:20.287 filename=/dev/nvme1n1 00:26:20.287 [job3] 00:26:20.287 filename=/dev/nvme2n1 00:26:20.287 [job4] 00:26:20.287 filename=/dev/nvme3n1 00:26:20.287 [job5] 00:26:20.287 filename=/dev/nvme4n1 00:26:20.287 [job6] 00:26:20.287 filename=/dev/nvme5n1 00:26:20.287 [job7] 00:26:20.287 filename=/dev/nvme6n1 00:26:20.287 [job8] 00:26:20.287 filename=/dev/nvme7n1 00:26:20.287 [job9] 00:26:20.287 filename=/dev/nvme8n1 00:26:20.287 [job10] 00:26:20.287 filename=/dev/nvme9n1 00:26:20.287 Could not set queue depth (nvme0n1) 00:26:20.287 Could not set queue depth (nvme10n1) 00:26:20.287 Could not set queue depth (nvme1n1) 00:26:20.287 Could not set queue depth (nvme2n1) 00:26:20.287 Could not set queue depth (nvme3n1) 00:26:20.287 Could not set queue depth (nvme4n1) 00:26:20.287 Could not set queue depth (nvme5n1) 00:26:20.287 Could not set queue depth (nvme6n1) 00:26:20.287 Could not set queue depth (nvme7n1) 00:26:20.287 Could not set queue depth (nvme8n1) 00:26:20.287 Could not set queue depth (nvme9n1) 00:26:20.287 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.287 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.287 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.287 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.287 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.287 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.287 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.287 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.287 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.287 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.287 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.287 fio-3.35 00:26:20.287 Starting 11 threads 00:26:32.501 00:26:32.501 job0: (groupid=0, jobs=1): err= 0: pid=94619: Mon Jul 22 18:33:42 2024 00:26:32.501 read: IOPS=290, BW=72.5MiB/s (76.1MB/s)(739MiB/10181msec) 00:26:32.501 slat (usec): min=17, max=184651, avg=3299.93, stdev=13275.85 00:26:32.501 clat (msec): min=84, max=382, avg=216.68, stdev=52.37 00:26:32.501 lat (msec): min=84, max=419, avg=219.98, stdev=54.69 00:26:32.501 clat percentiles (msec): 00:26:32.501 | 1.00th=[ 93], 5.00th=[ 120], 10.00th=[ 131], 20.00th=[ 167], 00:26:32.501 | 30.00th=[ 205], 40.00th=[ 215], 50.00th=[ 226], 60.00th=[ 234], 00:26:32.501 | 70.00th=[ 247], 80.00th=[ 259], 90.00th=[ 275], 95.00th=[ 288], 00:26:32.501 | 99.00th=[ 317], 99.50th=[ 321], 99.90th=[ 384], 99.95th=[ 384], 00:26:32.501 | 99.99th=[ 384] 00:26:32.501 bw ( KiB/s): min=57856, max=136669, per=6.01%, avg=73957.85, stdev=19263.61, samples=20 00:26:32.501 iops : min= 226, max= 533, avg=288.80, stdev=75.09, samples=20 
00:26:32.501 lat (msec) : 100=1.96%, 250=70.01%, 500=28.03% 00:26:32.501 cpu : usr=0.14%, sys=1.19%, ctx=636, majf=0, minf=4097 00:26:32.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:32.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.501 issued rwts: total=2954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.501 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.501 job1: (groupid=0, jobs=1): err= 0: pid=94621: Mon Jul 22 18:33:42 2024 00:26:32.501 read: IOPS=529, BW=132MiB/s (139MB/s)(1336MiB/10082msec) 00:26:32.501 slat (usec): min=15, max=87178, avg=1822.88, stdev=6661.83 00:26:32.501 clat (msec): min=35, max=191, avg=118.52, stdev=22.98 00:26:32.501 lat (msec): min=35, max=224, avg=120.34, stdev=23.95 00:26:32.501 clat percentiles (msec): 00:26:32.501 | 1.00th=[ 59], 5.00th=[ 79], 10.00th=[ 88], 20.00th=[ 101], 00:26:32.501 | 30.00th=[ 110], 40.00th=[ 115], 50.00th=[ 121], 60.00th=[ 126], 00:26:32.501 | 70.00th=[ 130], 80.00th=[ 136], 90.00th=[ 144], 95.00th=[ 153], 00:26:32.501 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 192], 99.95th=[ 192], 00:26:32.501 | 99.99th=[ 192] 00:26:32.501 bw ( KiB/s): min=102706, max=173056, per=10.99%, avg=135142.00, stdev=16238.18, samples=20 00:26:32.501 iops : min= 401, max= 676, avg=527.80, stdev=63.45, samples=20 00:26:32.501 lat (msec) : 50=0.69%, 100=18.72%, 250=80.59% 00:26:32.501 cpu : usr=0.29%, sys=1.82%, ctx=830, majf=0, minf=4097 00:26:32.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:32.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.501 issued rwts: total=5343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.501 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.501 job2: (groupid=0, jobs=1): err= 0: pid=94622: Mon Jul 22 18:33:42 2024 00:26:32.501 read: IOPS=563, BW=141MiB/s (148MB/s)(1418MiB/10066msec) 00:26:32.501 slat (usec): min=13, max=97293, avg=1694.18, stdev=7001.97 00:26:32.501 clat (msec): min=43, max=330, avg=111.53, stdev=56.71 00:26:32.501 lat (msec): min=43, max=336, avg=113.22, stdev=57.85 00:26:32.501 clat percentiles (msec): 00:26:32.501 | 1.00th=[ 53], 5.00th=[ 64], 10.00th=[ 69], 20.00th=[ 74], 00:26:32.501 | 30.00th=[ 79], 40.00th=[ 82], 50.00th=[ 86], 60.00th=[ 92], 00:26:32.501 | 70.00th=[ 109], 80.00th=[ 136], 90.00th=[ 224], 95.00th=[ 236], 00:26:32.501 | 99.00th=[ 264], 99.50th=[ 275], 99.90th=[ 279], 99.95th=[ 292], 00:26:32.501 | 99.99th=[ 330] 00:26:32.501 bw ( KiB/s): min=63361, max=224256, per=11.68%, avg=143600.65, stdev=61259.42, samples=20 00:26:32.501 iops : min= 247, max= 876, avg=560.75, stdev=239.40, samples=20 00:26:32.501 lat (msec) : 50=0.26%, 100=65.93%, 250=31.61%, 500=2.20% 00:26:32.501 cpu : usr=0.21%, sys=2.27%, ctx=1025, majf=0, minf=4097 00:26:32.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:32.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.501 issued rwts: total=5673,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.501 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.501 job3: (groupid=0, jobs=1): err= 0: pid=94623: Mon Jul 22 18:33:42 2024 00:26:32.501 read: IOPS=685, 
BW=171MiB/s (180MB/s)(1723MiB/10054msec) 00:26:32.501 slat (usec): min=17, max=68721, avg=1407.61, stdev=5377.87 00:26:32.501 clat (msec): min=40, max=269, avg=91.71, stdev=25.62 00:26:32.501 lat (msec): min=40, max=269, avg=93.12, stdev=26.13 00:26:32.501 clat percentiles (msec): 00:26:32.502 | 1.00th=[ 58], 5.00th=[ 64], 10.00th=[ 69], 20.00th=[ 74], 00:26:32.502 | 30.00th=[ 79], 40.00th=[ 83], 50.00th=[ 86], 60.00th=[ 90], 00:26:32.502 | 70.00th=[ 95], 80.00th=[ 104], 90.00th=[ 124], 95.00th=[ 146], 00:26:32.502 | 99.00th=[ 188], 99.50th=[ 213], 99.90th=[ 247], 99.95th=[ 271], 00:26:32.502 | 99.99th=[ 271] 00:26:32.502 bw ( KiB/s): min=102912, max=216654, per=14.21%, avg=174753.95, stdev=35764.03, samples=20 00:26:32.502 iops : min= 402, max= 846, avg=682.60, stdev=139.67, samples=20 00:26:32.502 lat (msec) : 50=0.13%, 100=77.58%, 250=22.20%, 500=0.09% 00:26:32.502 cpu : usr=0.24%, sys=2.43%, ctx=1105, majf=0, minf=4097 00:26:32.502 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:26:32.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.502 issued rwts: total=6892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.502 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.502 job4: (groupid=0, jobs=1): err= 0: pid=94624: Mon Jul 22 18:33:42 2024 00:26:32.502 read: IOPS=292, BW=73.1MiB/s (76.6MB/s)(745MiB/10196msec) 00:26:32.502 slat (usec): min=17, max=154421, avg=3341.45, stdev=13217.60 00:26:32.502 clat (msec): min=18, max=397, avg=215.10, stdev=65.64 00:26:32.502 lat (msec): min=18, max=428, avg=218.44, stdev=67.61 00:26:32.502 clat percentiles (msec): 00:26:32.502 | 1.00th=[ 33], 5.00th=[ 105], 10.00th=[ 123], 20.00th=[ 148], 00:26:32.502 | 30.00th=[ 205], 40.00th=[ 224], 50.00th=[ 232], 60.00th=[ 241], 00:26:32.502 | 70.00th=[ 249], 80.00th=[ 266], 90.00th=[ 279], 95.00th=[ 292], 00:26:32.502 | 99.00th=[ 376], 99.50th=[ 388], 99.90th=[ 397], 99.95th=[ 397], 00:26:32.502 | 99.99th=[ 397] 00:26:32.502 bw ( KiB/s): min=52736, max=151552, per=6.07%, avg=74659.80, stdev=24524.34, samples=20 00:26:32.502 iops : min= 206, max= 592, avg=291.50, stdev=95.71, samples=20 00:26:32.502 lat (msec) : 20=0.07%, 50=2.42%, 100=2.08%, 250=66.02%, 500=29.42% 00:26:32.502 cpu : usr=0.12%, sys=1.16%, ctx=513, majf=0, minf=4097 00:26:32.502 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:32.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.502 issued rwts: total=2981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.502 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.502 job5: (groupid=0, jobs=1): err= 0: pid=94625: Mon Jul 22 18:33:42 2024 00:26:32.502 read: IOPS=431, BW=108MiB/s (113MB/s)(1099MiB/10192msec) 00:26:32.502 slat (usec): min=17, max=168568, avg=2180.44, stdev=10884.83 00:26:32.502 clat (msec): min=18, max=420, avg=145.78, stdev=76.73 00:26:32.502 lat (msec): min=18, max=442, avg=147.96, stdev=78.47 00:26:32.502 clat percentiles (msec): 00:26:32.502 | 1.00th=[ 37], 5.00th=[ 64], 10.00th=[ 73], 20.00th=[ 84], 00:26:32.502 | 30.00th=[ 94], 40.00th=[ 105], 50.00th=[ 115], 60.00th=[ 126], 00:26:32.502 | 70.00th=[ 186], 80.00th=[ 234], 90.00th=[ 264], 95.00th=[ 284], 00:26:32.502 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 414], 99.95th=[ 422], 00:26:32.502 | 99.99th=[ 422] 
00:26:32.502 bw ( KiB/s): min=54272, max=188416, per=9.01%, avg=110850.55, stdev=51790.18, samples=20 00:26:32.502 iops : min= 212, max= 736, avg=432.95, stdev=202.29, samples=20 00:26:32.502 lat (msec) : 20=0.43%, 50=1.46%, 100=33.74%, 250=48.40%, 500=15.97% 00:26:32.502 cpu : usr=0.14%, sys=1.67%, ctx=775, majf=0, minf=4097 00:26:32.502 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:32.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.502 issued rwts: total=4395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.502 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.502 job6: (groupid=0, jobs=1): err= 0: pid=94626: Mon Jul 22 18:33:42 2024 00:26:32.502 read: IOPS=551, BW=138MiB/s (144MB/s)(1389MiB/10085msec) 00:26:32.502 slat (usec): min=16, max=149270, avg=1687.18, stdev=6725.25 00:26:32.502 clat (usec): min=1549, max=269026, avg=114194.50, stdev=34642.47 00:26:32.502 lat (usec): min=1577, max=372534, avg=115881.68, stdev=35472.16 00:26:32.502 clat percentiles (msec): 00:26:32.502 | 1.00th=[ 4], 5.00th=[ 36], 10.00th=[ 85], 20.00th=[ 102], 00:26:32.502 | 30.00th=[ 109], 40.00th=[ 114], 50.00th=[ 118], 60.00th=[ 124], 00:26:32.502 | 70.00th=[ 128], 80.00th=[ 133], 90.00th=[ 142], 95.00th=[ 153], 00:26:32.502 | 99.00th=[ 226], 99.50th=[ 243], 99.90th=[ 253], 99.95th=[ 253], 00:26:32.502 | 99.99th=[ 271] 00:26:32.502 bw ( KiB/s): min=119296, max=204391, per=11.43%, avg=140592.60, stdev=23819.97, samples=20 00:26:32.502 iops : min= 466, max= 798, avg=549.05, stdev=92.99, samples=20 00:26:32.502 lat (msec) : 2=0.11%, 4=0.95%, 10=2.14%, 20=0.61%, 50=3.20% 00:26:32.502 lat (msec) : 100=12.07%, 250=80.53%, 500=0.38% 00:26:32.502 cpu : usr=0.25%, sys=2.07%, ctx=1027, majf=0, minf=4097 00:26:32.502 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:32.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.502 issued rwts: total=5557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.502 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.502 job7: (groupid=0, jobs=1): err= 0: pid=94627: Mon Jul 22 18:33:42 2024 00:26:32.502 read: IOPS=623, BW=156MiB/s (164MB/s)(1573MiB/10085msec) 00:26:32.502 slat (usec): min=13, max=71241, avg=1534.65, stdev=5838.33 00:26:32.502 clat (msec): min=26, max=199, avg=100.76, stdev=35.47 00:26:32.502 lat (msec): min=26, max=206, avg=102.30, stdev=36.11 00:26:32.502 clat percentiles (msec): 00:26:32.502 | 1.00th=[ 29], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 71], 00:26:32.502 | 30.00th=[ 82], 40.00th=[ 94], 50.00th=[ 107], 60.00th=[ 114], 00:26:32.502 | 70.00th=[ 124], 80.00th=[ 133], 90.00th=[ 142], 95.00th=[ 153], 00:26:32.502 | 99.00th=[ 176], 99.50th=[ 188], 99.90th=[ 190], 99.95th=[ 192], 00:26:32.502 | 99.99th=[ 201] 00:26:32.502 bw ( KiB/s): min=103936, max=340822, per=12.96%, avg=159391.30, stdev=57283.32, samples=20 00:26:32.502 iops : min= 406, max= 1331, avg=622.50, stdev=223.72, samples=20 00:26:32.502 lat (msec) : 50=13.22%, 100=31.71%, 250=55.07% 00:26:32.502 cpu : usr=0.16%, sys=2.23%, ctx=1138, majf=0, minf=4097 00:26:32.502 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:32.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.502 issued rwts: total=6292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.502 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.502 job8: (groupid=0, jobs=1): err= 0: pid=94628: Mon Jul 22 18:33:42 2024 00:26:32.502 read: IOPS=290, BW=72.6MiB/s (76.2MB/s)(739MiB/10179msec) 00:26:32.502 slat (usec): min=18, max=165370, avg=3359.72, stdev=13967.59 00:26:32.502 clat (msec): min=21, max=402, avg=216.35, stdev=61.77 00:26:32.502 lat (msec): min=21, max=419, avg=219.71, stdev=64.08 00:26:32.502 clat percentiles (msec): 00:26:32.502 | 1.00th=[ 36], 5.00th=[ 107], 10.00th=[ 124], 20.00th=[ 150], 00:26:32.502 | 30.00th=[ 209], 40.00th=[ 222], 50.00th=[ 232], 60.00th=[ 243], 00:26:32.502 | 70.00th=[ 253], 80.00th=[ 266], 90.00th=[ 284], 95.00th=[ 292], 00:26:32.502 | 99.00th=[ 313], 99.50th=[ 326], 99.90th=[ 368], 99.95th=[ 384], 00:26:32.502 | 99.99th=[ 401] 00:26:32.502 bw ( KiB/s): min=49152, max=142562, per=6.02%, avg=74005.85, stdev=23892.82, samples=20 00:26:32.502 iops : min= 192, max= 556, avg=288.95, stdev=93.08, samples=20 00:26:32.502 lat (msec) : 50=2.00%, 100=2.16%, 250=62.26%, 500=33.58% 00:26:32.502 cpu : usr=0.11%, sys=1.17%, ctx=554, majf=0, minf=4097 00:26:32.502 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:32.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.502 issued rwts: total=2957,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.502 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.502 job9: (groupid=0, jobs=1): err= 0: pid=94629: Mon Jul 22 18:33:42 2024 00:26:32.502 read: IOPS=304, BW=76.2MiB/s (79.9MB/s)(777MiB/10196msec) 00:26:32.502 slat (usec): min=19, max=106187, avg=3147.36, stdev=10688.02 00:26:32.502 clat (msec): min=23, max=441, avg=206.43, stdev=69.80 00:26:32.502 lat (msec): min=23, max=441, avg=209.58, stdev=71.45 00:26:32.502 clat percentiles (msec): 00:26:32.502 | 1.00th=[ 46], 5.00th=[ 72], 10.00th=[ 117], 20.00th=[ 133], 00:26:32.502 | 30.00th=[ 155], 40.00th=[ 215], 50.00th=[ 228], 60.00th=[ 239], 00:26:32.502 | 70.00th=[ 249], 80.00th=[ 268], 90.00th=[ 284], 95.00th=[ 296], 00:26:32.502 | 99.00th=[ 326], 99.50th=[ 347], 99.90th=[ 409], 99.95th=[ 443], 00:26:32.502 | 99.99th=[ 443] 00:26:32.502 bw ( KiB/s): min=53760, max=154826, per=6.34%, avg=77927.40, stdev=27443.63, samples=20 00:26:32.502 iops : min= 210, max= 604, avg=304.25, stdev=107.04, samples=20 00:26:32.502 lat (msec) : 50=1.99%, 100=5.47%, 250=63.38%, 500=29.15% 00:26:32.502 cpu : usr=0.14%, sys=1.28%, ctx=653, majf=0, minf=4097 00:26:32.502 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:32.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.502 issued rwts: total=3108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.502 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.502 job10: (groupid=0, jobs=1): err= 0: pid=94630: Mon Jul 22 18:33:42 2024 00:26:32.502 read: IOPS=277, BW=69.4MiB/s (72.7MB/s)(707MiB/10196msec) 00:26:32.502 slat (usec): min=18, max=102683, avg=3489.23, stdev=11276.48 00:26:32.502 clat (msec): min=48, max=421, avg=226.36, stdev=59.43 00:26:32.502 lat (msec): min=48, max=421, avg=229.85, stdev=61.24 00:26:32.502 clat percentiles (msec): 00:26:32.502 | 1.00th=[ 81], 5.00th=[ 115], 10.00th=[ 
127], 20.00th=[ 165], 00:26:32.502 | 30.00th=[ 222], 40.00th=[ 232], 50.00th=[ 241], 60.00th=[ 249], 00:26:32.502 | 70.00th=[ 257], 80.00th=[ 275], 90.00th=[ 292], 95.00th=[ 305], 00:26:32.502 | 99.00th=[ 326], 99.50th=[ 393], 99.90th=[ 422], 99.95th=[ 422], 00:26:32.502 | 99.99th=[ 422] 00:26:32.502 bw ( KiB/s): min=54272, max=118509, per=5.76%, avg=70806.65, stdev=18603.44, samples=20 00:26:32.502 iops : min= 212, max= 462, avg=276.40, stdev=72.47, samples=20 00:26:32.502 lat (msec) : 50=0.18%, 100=1.80%, 250=61.58%, 500=36.44% 00:26:32.502 cpu : usr=0.11%, sys=1.16%, ctx=537, majf=0, minf=4097 00:26:32.502 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:32.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.503 issued rwts: total=2829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.503 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.503 00:26:32.503 Run status group 0 (all jobs): 00:26:32.503 READ: bw=1201MiB/s (1259MB/s), 69.4MiB/s-171MiB/s (72.7MB/s-180MB/s), io=12.0GiB (12.8GB), run=10054-10196msec 00:26:32.503 00:26:32.503 Disk stats (read/write): 00:26:32.503 nvme0n1: ios=5736/0, merge=0/0, ticks=1233086/0, in_queue=1233086, util=97.15% 00:26:32.503 nvme10n1: ios=10558/0, merge=0/0, ticks=1236215/0, in_queue=1236215, util=97.35% 00:26:32.503 nvme1n1: ios=11222/0, merge=0/0, ticks=1239376/0, in_queue=1239376, util=97.76% 00:26:32.503 nvme2n1: ios=13593/0, merge=0/0, ticks=1231925/0, in_queue=1231925, util=97.21% 00:26:32.503 nvme3n1: ios=5834/0, merge=0/0, ticks=1226417/0, in_queue=1226417, util=97.79% 00:26:32.503 nvme4n1: ios=8663/0, merge=0/0, ticks=1226879/0, in_queue=1226879, util=97.75% 00:26:32.503 nvme5n1: ios=10989/0, merge=0/0, ticks=1237771/0, in_queue=1237771, util=97.99% 00:26:32.503 nvme6n1: ios=12507/0, merge=0/0, ticks=1235016/0, in_queue=1235016, util=97.74% 00:26:32.503 nvme7n1: ios=5786/0, merge=0/0, ticks=1234066/0, in_queue=1234066, util=98.27% 00:26:32.503 nvme8n1: ios=6088/0, merge=0/0, ticks=1235656/0, in_queue=1235656, util=98.81% 00:26:32.503 nvme9n1: ios=5531/0, merge=0/0, ticks=1235362/0, in_queue=1235362, util=98.72% 00:26:32.503 18:33:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:32.503 [global] 00:26:32.503 thread=1 00:26:32.503 invalidate=1 00:26:32.503 rw=randwrite 00:26:32.503 time_based=1 00:26:32.503 runtime=10 00:26:32.503 ioengine=libaio 00:26:32.503 direct=1 00:26:32.503 bs=262144 00:26:32.503 iodepth=64 00:26:32.503 norandommap=1 00:26:32.503 numjobs=1 00:26:32.503 00:26:32.503 [job0] 00:26:32.503 filename=/dev/nvme0n1 00:26:32.503 [job1] 00:26:32.503 filename=/dev/nvme10n1 00:26:32.503 [job2] 00:26:32.503 filename=/dev/nvme1n1 00:26:32.503 [job3] 00:26:32.503 filename=/dev/nvme2n1 00:26:32.503 [job4] 00:26:32.503 filename=/dev/nvme3n1 00:26:32.503 [job5] 00:26:32.503 filename=/dev/nvme4n1 00:26:32.503 [job6] 00:26:32.503 filename=/dev/nvme5n1 00:26:32.503 [job7] 00:26:32.503 filename=/dev/nvme6n1 00:26:32.503 [job8] 00:26:32.503 filename=/dev/nvme7n1 00:26:32.503 [job9] 00:26:32.503 filename=/dev/nvme8n1 00:26:32.503 [job10] 00:26:32.503 filename=/dev/nvme9n1 00:26:32.503 Could not set queue depth (nvme0n1) 00:26:32.503 Could not set queue depth (nvme10n1) 00:26:32.503 Could not set queue depth (nvme1n1) 00:26:32.503 Could not set queue 
depth (nvme2n1) 00:26:32.503 Could not set queue depth (nvme3n1) 00:26:32.503 Could not set queue depth (nvme4n1) 00:26:32.503 Could not set queue depth (nvme5n1) 00:26:32.503 Could not set queue depth (nvme6n1) 00:26:32.503 Could not set queue depth (nvme7n1) 00:26:32.503 Could not set queue depth (nvme8n1) 00:26:32.503 Could not set queue depth (nvme9n1) 00:26:32.503 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.503 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.503 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.503 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.503 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.503 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.503 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.503 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.503 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.503 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.503 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.503 fio-3.35 00:26:32.503 Starting 11 threads 00:26:42.477 00:26:42.477 job0: (groupid=0, jobs=1): err= 0: pid=94820: Mon Jul 22 18:33:53 2024 00:26:42.477 write: IOPS=304, BW=76.0MiB/s (79.7MB/s)(776MiB/10203msec); 0 zone resets 00:26:42.477 slat (usec): min=26, max=26321, avg=3173.50, stdev=5605.65 00:26:42.477 clat (msec): min=16, max=400, avg=207.22, stdev=29.90 00:26:42.477 lat (msec): min=16, max=400, avg=210.39, stdev=29.91 00:26:42.477 clat percentiles (msec): 00:26:42.477 | 1.00th=[ 80], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 199], 00:26:42.477 | 30.00th=[ 201], 40.00th=[ 205], 50.00th=[ 207], 60.00th=[ 211], 00:26:42.477 | 70.00th=[ 213], 80.00th=[ 220], 90.00th=[ 228], 95.00th=[ 236], 00:26:42.477 | 99.00th=[ 296], 99.50th=[ 351], 99.90th=[ 388], 99.95th=[ 401], 00:26:42.477 | 99.99th=[ 401] 00:26:42.477 bw ( KiB/s): min=65536, max=90112, per=6.67%, avg=77776.35, stdev=4964.75, samples=20 00:26:42.477 iops : min= 256, max= 352, avg=303.75, stdev=19.46, samples=20 00:26:42.477 lat (msec) : 20=0.06%, 50=0.42%, 100=0.84%, 250=95.07%, 500=3.61% 00:26:42.477 cpu : usr=1.08%, sys=1.01%, ctx=2320, majf=0, minf=1 00:26:42.477 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:42.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.477 issued rwts: total=0,3102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.477 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.477 job1: (groupid=0, jobs=1): err= 0: pid=94821: Mon Jul 22 18:33:53 2024 00:26:42.477 write: IOPS=544, BW=136MiB/s (143MB/s)(1374MiB/10083msec); 0 zone resets 00:26:42.477 slat (usec): min=19, max=36498, avg=1814.31, 
stdev=3135.71 00:26:42.477 clat (msec): min=40, max=245, avg=115.60, stdev=14.77 00:26:42.477 lat (msec): min=40, max=245, avg=117.42, stdev=14.64 00:26:42.477 clat percentiles (msec): 00:26:42.477 | 1.00th=[ 100], 5.00th=[ 103], 10.00th=[ 104], 20.00th=[ 108], 00:26:42.478 | 30.00th=[ 110], 40.00th=[ 110], 50.00th=[ 111], 60.00th=[ 115], 00:26:42.478 | 70.00th=[ 120], 80.00th=[ 123], 90.00th=[ 129], 95.00th=[ 136], 00:26:42.478 | 99.00th=[ 186], 99.50th=[ 213], 99.90th=[ 234], 99.95th=[ 245], 00:26:42.478 | 99.99th=[ 245] 00:26:42.478 bw ( KiB/s): min=96256, max=154112, per=11.91%, avg=138991.85, stdev=13657.59, samples=20 00:26:42.478 iops : min= 376, max= 602, avg=542.80, stdev=53.36, samples=20 00:26:42.478 lat (msec) : 50=0.15%, 100=1.26%, 250=98.60% 00:26:42.478 cpu : usr=1.56%, sys=1.74%, ctx=6225, majf=0, minf=1 00:26:42.478 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:42.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.478 issued rwts: total=0,5494,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.478 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.478 job2: (groupid=0, jobs=1): err= 0: pid=94830: Mon Jul 22 18:33:53 2024 00:26:42.478 write: IOPS=386, BW=96.6MiB/s (101MB/s)(979MiB/10136msec); 0 zone resets 00:26:42.478 slat (usec): min=25, max=46034, avg=2528.23, stdev=4389.41 00:26:42.478 clat (msec): min=11, max=283, avg=163.00, stdev=21.06 00:26:42.478 lat (msec): min=11, max=283, avg=165.52, stdev=20.93 00:26:42.478 clat percentiles (msec): 00:26:42.478 | 1.00th=[ 79], 5.00th=[ 146], 10.00th=[ 146], 20.00th=[ 155], 00:26:42.478 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:26:42.478 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 186], 95.00th=[ 199], 00:26:42.478 | 99.00th=[ 222], 99.50th=[ 234], 99.90th=[ 275], 99.95th=[ 284], 00:26:42.478 | 99.99th=[ 284] 00:26:42.478 bw ( KiB/s): min=81920, max=107008, per=8.45%, avg=98634.90, stdev=7889.19, samples=20 00:26:42.478 iops : min= 320, max= 418, avg=385.25, stdev=30.86, samples=20 00:26:42.478 lat (msec) : 20=0.08%, 50=0.46%, 100=0.66%, 250=98.44%, 500=0.36% 00:26:42.478 cpu : usr=1.20%, sys=1.41%, ctx=5380, majf=0, minf=1 00:26:42.478 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:42.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.478 issued rwts: total=0,3917,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.478 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.478 job3: (groupid=0, jobs=1): err= 0: pid=94834: Mon Jul 22 18:33:53 2024 00:26:42.478 write: IOPS=303, BW=76.0MiB/s (79.7MB/s)(776MiB/10205msec); 0 zone resets 00:26:42.478 slat (usec): min=26, max=45401, avg=3138.87, stdev=5575.86 00:26:42.478 clat (msec): min=11, max=399, avg=207.25, stdev=29.15 00:26:42.478 lat (msec): min=11, max=399, avg=210.39, stdev=29.04 00:26:42.478 clat percentiles (msec): 00:26:42.478 | 1.00th=[ 106], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 197], 00:26:42.478 | 30.00th=[ 199], 40.00th=[ 201], 50.00th=[ 205], 60.00th=[ 209], 00:26:42.478 | 70.00th=[ 215], 80.00th=[ 222], 90.00th=[ 232], 95.00th=[ 241], 00:26:42.478 | 99.00th=[ 296], 99.50th=[ 347], 99.90th=[ 388], 99.95th=[ 401], 00:26:42.478 | 99.99th=[ 401] 00:26:42.478 bw ( KiB/s): min=65536, max=83968, per=6.67%, avg=77782.60, 
stdev=5213.87, samples=20 00:26:42.478 iops : min= 256, max= 328, avg=303.80, stdev=20.36, samples=20 00:26:42.478 lat (msec) : 20=0.10%, 50=0.71%, 100=0.10%, 250=95.42%, 500=3.68% 00:26:42.478 cpu : usr=0.95%, sys=1.02%, ctx=2569, majf=0, minf=1 00:26:42.478 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:42.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.478 issued rwts: total=0,3102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.478 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.478 job4: (groupid=0, jobs=1): err= 0: pid=94835: Mon Jul 22 18:33:53 2024 00:26:42.478 write: IOPS=385, BW=96.4MiB/s (101MB/s)(979MiB/10146msec); 0 zone resets 00:26:42.478 slat (usec): min=17, max=34396, avg=2551.57, stdev=4404.43 00:26:42.478 clat (msec): min=10, max=293, avg=163.21, stdev=20.33 00:26:42.478 lat (msec): min=11, max=293, avg=165.76, stdev=20.15 00:26:42.478 clat percentiles (msec): 00:26:42.478 | 1.00th=[ 121], 5.00th=[ 146], 10.00th=[ 146], 20.00th=[ 155], 00:26:42.478 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:26:42.478 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 184], 95.00th=[ 199], 00:26:42.478 | 99.00th=[ 224], 99.50th=[ 245], 99.90th=[ 284], 99.95th=[ 292], 00:26:42.478 | 99.99th=[ 292] 00:26:42.478 bw ( KiB/s): min=81920, max=107008, per=8.45%, avg=98589.05, stdev=8110.64, samples=20 00:26:42.478 iops : min= 320, max= 418, avg=384.85, stdev=31.61, samples=20 00:26:42.478 lat (msec) : 20=0.23%, 50=0.20%, 100=0.31%, 250=98.80%, 500=0.46% 00:26:42.478 cpu : usr=1.30%, sys=1.01%, ctx=4389, majf=0, minf=1 00:26:42.478 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:42.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.478 issued rwts: total=0,3914,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.478 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.478 job5: (groupid=0, jobs=1): err= 0: pid=94837: Mon Jul 22 18:33:53 2024 00:26:42.478 write: IOPS=568, BW=142MiB/s (149MB/s)(1434MiB/10088msec); 0 zone resets 00:26:42.478 slat (usec): min=24, max=12974, avg=1737.17, stdev=2939.39 00:26:42.478 clat (msec): min=17, max=189, avg=110.74, stdev=11.63 00:26:42.478 lat (msec): min=17, max=189, avg=112.47, stdev=11.44 00:26:42.478 clat percentiles (msec): 00:26:42.478 | 1.00th=[ 97], 5.00th=[ 99], 10.00th=[ 100], 20.00th=[ 105], 00:26:42.478 | 30.00th=[ 105], 40.00th=[ 106], 50.00th=[ 107], 60.00th=[ 112], 00:26:42.478 | 70.00th=[ 115], 80.00th=[ 120], 90.00th=[ 125], 95.00th=[ 132], 00:26:42.478 | 99.00th=[ 146], 99.50th=[ 153], 99.90th=[ 184], 99.95th=[ 184], 00:26:42.478 | 99.99th=[ 190] 00:26:42.478 bw ( KiB/s): min=120320, max=158208, per=12.45%, avg=145252.35, stdev=12097.70, samples=20 00:26:42.478 iops : min= 470, max= 618, avg=567.30, stdev=47.34, samples=20 00:26:42.478 lat (msec) : 20=0.05%, 50=0.21%, 100=12.39%, 250=87.35% 00:26:42.478 cpu : usr=1.79%, sys=1.66%, ctx=6794, majf=0, minf=1 00:26:42.478 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:42.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.478 issued rwts: total=0,5737,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.478 latency : 
target=0, window=0, percentile=100.00%, depth=64 00:26:42.478 job6: (groupid=0, jobs=1): err= 0: pid=94841: Mon Jul 22 18:33:53 2024 00:26:42.478 write: IOPS=567, BW=142MiB/s (149MB/s)(1432MiB/10090msec); 0 zone resets 00:26:42.478 slat (usec): min=24, max=14032, avg=1719.93, stdev=2933.96 00:26:42.478 clat (msec): min=18, max=195, avg=111.01, stdev=12.10 00:26:42.478 lat (msec): min=18, max=195, avg=112.73, stdev=11.86 00:26:42.478 clat percentiles (msec): 00:26:42.478 | 1.00th=[ 97], 5.00th=[ 99], 10.00th=[ 100], 20.00th=[ 105], 00:26:42.478 | 30.00th=[ 105], 40.00th=[ 106], 50.00th=[ 107], 60.00th=[ 112], 00:26:42.478 | 70.00th=[ 115], 80.00th=[ 120], 90.00th=[ 126], 95.00th=[ 132], 00:26:42.478 | 99.00th=[ 150], 99.50th=[ 169], 99.90th=[ 190], 99.95th=[ 192], 00:26:42.478 | 99.99th=[ 197] 00:26:42.478 bw ( KiB/s): min=117248, max=157696, per=12.42%, avg=144931.95, stdev=12420.64, samples=20 00:26:42.478 iops : min= 458, max= 616, avg=566.05, stdev=48.56, samples=20 00:26:42.478 lat (msec) : 20=0.05%, 50=0.19%, 100=11.42%, 250=88.33% 00:26:42.478 cpu : usr=1.55%, sys=1.61%, ctx=6536, majf=0, minf=1 00:26:42.478 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:42.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.478 issued rwts: total=0,5726,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.478 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.478 job7: (groupid=0, jobs=1): err= 0: pid=94842: Mon Jul 22 18:33:53 2024 00:26:42.478 write: IOPS=384, BW=96.2MiB/s (101MB/s)(976MiB/10139msec); 0 zone resets 00:26:42.478 slat (usec): min=28, max=30691, avg=2556.91, stdev=4401.94 00:26:42.478 clat (msec): min=10, max=290, avg=163.58, stdev=19.47 00:26:42.478 lat (msec): min=10, max=290, avg=166.13, stdev=19.25 00:26:42.478 clat percentiles (msec): 00:26:42.478 | 1.00th=[ 144], 5.00th=[ 146], 10.00th=[ 146], 20.00th=[ 155], 00:26:42.478 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:26:42.478 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 186], 95.00th=[ 199], 00:26:42.478 | 99.00th=[ 222], 99.50th=[ 241], 99.90th=[ 279], 99.95th=[ 292], 00:26:42.478 | 99.99th=[ 292] 00:26:42.478 bw ( KiB/s): min=81920, max=106496, per=8.42%, avg=98283.70, stdev=8171.75, samples=20 00:26:42.478 iops : min= 320, max= 416, avg=383.90, stdev=31.92, samples=20 00:26:42.478 lat (msec) : 20=0.08%, 50=0.20%, 100=0.41%, 250=98.85%, 500=0.46% 00:26:42.478 cpu : usr=1.24%, sys=1.21%, ctx=5134, majf=0, minf=1 00:26:42.478 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:42.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.478 issued rwts: total=0,3903,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.478 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.478 job8: (groupid=0, jobs=1): err= 0: pid=94843: Mon Jul 22 18:33:53 2024 00:26:42.478 write: IOPS=544, BW=136MiB/s (143MB/s)(1375MiB/10097msec); 0 zone resets 00:26:42.478 slat (usec): min=23, max=26842, avg=1812.53, stdev=3103.00 00:26:42.478 clat (msec): min=10, max=222, avg=115.62, stdev=15.42 00:26:42.478 lat (msec): min=10, max=222, avg=117.43, stdev=15.32 00:26:42.478 clat percentiles (msec): 00:26:42.478 | 1.00th=[ 99], 5.00th=[ 103], 10.00th=[ 104], 20.00th=[ 108], 00:26:42.478 | 30.00th=[ 110], 40.00th=[ 110], 50.00th=[ 112], 
60.00th=[ 115], 00:26:42.478 | 70.00th=[ 121], 80.00th=[ 123], 90.00th=[ 129], 95.00th=[ 136], 00:26:42.478 | 99.00th=[ 194], 99.50th=[ 211], 99.90th=[ 220], 99.95th=[ 222], 00:26:42.478 | 99.99th=[ 224] 00:26:42.478 bw ( KiB/s): min=99328, max=153600, per=11.92%, avg=139120.80, stdev=13155.63, samples=20 00:26:42.478 iops : min= 388, max= 600, avg=543.35, stdev=51.36, samples=20 00:26:42.478 lat (msec) : 20=0.07%, 50=0.22%, 100=1.29%, 250=98.42% 00:26:42.478 cpu : usr=1.73%, sys=1.80%, ctx=6570, majf=0, minf=1 00:26:42.478 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:42.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.479 issued rwts: total=0,5499,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.479 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.479 job9: (groupid=0, jobs=1): err= 0: pid=94844: Mon Jul 22 18:33:53 2024 00:26:42.479 write: IOPS=301, BW=75.4MiB/s (79.1MB/s)(770MiB/10205msec); 0 zone resets 00:26:42.479 slat (usec): min=28, max=62673, avg=3246.15, stdev=5769.79 00:26:42.479 clat (msec): min=8, max=397, avg=208.75, stdev=32.90 00:26:42.479 lat (msec): min=8, max=397, avg=212.00, stdev=32.86 00:26:42.479 clat percentiles (msec): 00:26:42.479 | 1.00th=[ 46], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 197], 00:26:42.479 | 30.00th=[ 199], 40.00th=[ 203], 50.00th=[ 207], 60.00th=[ 211], 00:26:42.479 | 70.00th=[ 218], 80.00th=[ 222], 90.00th=[ 236], 95.00th=[ 255], 00:26:42.479 | 99.00th=[ 296], 99.50th=[ 347], 99.90th=[ 384], 99.95th=[ 397], 00:26:42.479 | 99.99th=[ 397] 00:26:42.479 bw ( KiB/s): min=61440, max=83968, per=6.61%, avg=77169.60, stdev=5490.28, samples=20 00:26:42.479 iops : min= 240, max= 328, avg=301.35, stdev=21.50, samples=20 00:26:42.479 lat (msec) : 10=0.16%, 20=0.13%, 50=0.78%, 100=0.78%, 250=92.17% 00:26:42.479 lat (msec) : 500=5.98% 00:26:42.479 cpu : usr=0.91%, sys=1.08%, ctx=3141, majf=0, minf=1 00:26:42.479 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:42.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.479 issued rwts: total=0,3078,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.479 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.479 job10: (groupid=0, jobs=1): err= 0: pid=94845: Mon Jul 22 18:33:53 2024 00:26:42.479 write: IOPS=298, BW=74.5MiB/s (78.2MB/s)(760MiB/10196msec); 0 zone resets 00:26:42.479 slat (usec): min=21, max=44589, avg=3285.91, stdev=5780.39 00:26:42.479 clat (msec): min=48, max=389, avg=211.25, stdev=26.08 00:26:42.479 lat (msec): min=48, max=389, avg=214.53, stdev=25.81 00:26:42.479 clat percentiles (msec): 00:26:42.479 | 1.00th=[ 110], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 199], 00:26:42.479 | 30.00th=[ 203], 40.00th=[ 207], 50.00th=[ 209], 60.00th=[ 211], 00:26:42.479 | 70.00th=[ 218], 80.00th=[ 222], 90.00th=[ 234], 95.00th=[ 247], 00:26:42.479 | 99.00th=[ 288], 99.50th=[ 338], 99.90th=[ 376], 99.95th=[ 388], 00:26:42.479 | 99.99th=[ 388] 00:26:42.479 bw ( KiB/s): min=63488, max=81920, per=6.53%, avg=76181.45, stdev=4840.81, samples=20 00:26:42.479 iops : min= 248, max= 320, avg=297.50, stdev=18.96, samples=20 00:26:42.479 lat (msec) : 50=0.13%, 100=0.79%, 250=94.74%, 500=4.34% 00:26:42.479 cpu : usr=0.99%, sys=1.01%, ctx=2871, majf=0, minf=1 00:26:42.479 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:42.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.479 issued rwts: total=0,3040,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.479 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.479 00:26:42.479 Run status group 0 (all jobs): 00:26:42.479 WRITE: bw=1139MiB/s (1195MB/s), 74.5MiB/s-142MiB/s (78.2MB/s-149MB/s), io=11.4GiB (12.2GB), run=10083-10205msec 00:26:42.479 00:26:42.479 Disk stats (read/write): 00:26:42.479 nvme0n1: ios=50/6075, merge=0/0, ticks=62/1211084, in_queue=1211146, util=98.05% 00:26:42.479 nvme10n1: ios=49/10842, merge=0/0, ticks=63/1214617, in_queue=1214680, util=98.14% 00:26:42.479 nvme1n1: ios=44/7698, merge=0/0, ticks=53/1213134, in_queue=1213187, util=98.21% 00:26:42.479 nvme2n1: ios=35/6076, merge=0/0, ticks=37/1211603, in_queue=1211640, util=98.27% 00:26:42.479 nvme3n1: ios=20/7706, merge=0/0, ticks=25/1214725, in_queue=1214750, util=98.26% 00:26:42.479 nvme4n1: ios=0/11335, merge=0/0, ticks=0/1214874, in_queue=1214874, util=98.15% 00:26:42.479 nvme5n1: ios=5/11327, merge=0/0, ticks=15/1216885, in_queue=1216900, util=98.36% 00:26:42.479 nvme6n1: ios=0/7678, merge=0/0, ticks=0/1212775, in_queue=1212775, util=98.41% 00:26:42.479 nvme7n1: ios=0/10880, merge=0/0, ticks=0/1217018, in_queue=1217018, util=98.72% 00:26:42.479 nvme8n1: ios=0/6025, merge=0/0, ticks=0/1209727, in_queue=1209727, util=98.70% 00:26:42.479 nvme9n1: ios=0/5940, merge=0/0, ticks=0/1208416, in_queue=1208416, util=98.60% 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:42.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.479 18:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:42.479 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:42.479 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.479 18:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:42.479 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:42.479 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:42.479 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:42.479 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:42.479 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:42.479 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:26:42.479 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:42.479 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:26:42.479 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:42.479 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:42.479 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.479 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.479 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.479 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:42.479 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:42.479 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.480 18:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:42.480 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:42.480 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.480 18:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:42.480 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:42.738 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:42.738 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.738 18:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:42.738 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:42.738 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:42.996 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:42.996 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:42.996 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:42.996 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:42.996 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:26:42.996 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:26:42.996 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:42.996 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:42.996 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:42.996 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.996 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.996 
18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.996 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:42.996 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:42.996 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:42.996 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:42.996 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:26:42.996 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:42.996 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:26:42.996 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:42.996 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:42.996 rmmod nvme_tcp 00:26:42.996 rmmod nvme_fabrics 00:26:42.996 rmmod nvme_keyring 00:26:42.996 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:42.996 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:26:42.996 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:26:42.997 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 94142 ']' 00:26:42.997 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 94142 00:26:42.997 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 94142 ']' 00:26:42.997 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 94142 00:26:42.997 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:26:42.997 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:42.997 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94142 00:26:42.997 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:42.997 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:42.997 killing process with pid 94142 00:26:42.997 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94142' 00:26:42.997 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 94142 00:26:42.997 18:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 94142 00:26:46.278 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:46.278 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:46.278 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:46.278 18:33:58 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:46.278 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:46.278 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.278 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:46.278 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.278 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:46.278 00:26:46.278 real 0m53.672s 00:26:46.278 user 3m3.159s 00:26:46.278 sys 0m22.930s 00:26:46.278 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:46.278 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:46.278 ************************************ 00:26:46.278 END TEST nvmf_multiconnection 00:26:46.278 ************************************ 00:26:46.278 18:33:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:26:46.278 18:33:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:46.278 18:33:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:46.278 18:33:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:46.569 18:33:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:46.569 ************************************ 00:26:46.569 START TEST nvmf_initiator_timeout 00:26:46.569 ************************************ 00:26:46.569 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:46.569 * Looking for test storage... 
00:26:46.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:46.570 18:33:58 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:46.570 18:33:58 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:46.570 Cannot find device "nvmf_tgt_br" 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # true 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:46.570 Cannot find device "nvmf_tgt_br2" 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # true 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:46.570 Cannot find device "nvmf_tgt_br" 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # true 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:46.570 Cannot find device "nvmf_tgt_br2" 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # true 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:46.570 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:46.570 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:46.570 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:46.571 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # 
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:46.571 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:46.829 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:46.829 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:46.829 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:46.829 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:46.829 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:46.829 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:46.829 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:46.829 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:46.829 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:46.829 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:46.829 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:46.829 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:46.829 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:46.829 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:46.829 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:46.829 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:46.829 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:46.829 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:46.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:46.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:26:46.829 00:26:46.829 --- 10.0.0.2 ping statistics --- 00:26:46.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:46.829 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:26:46.829 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:46.829 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:46.829 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:26:46.829 00:26:46.829 --- 10.0.0.3 ping statistics --- 00:26:46.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:46.829 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:26:46.829 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:46.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:46.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:26:46.829 00:26:46.829 --- 10.0.0.1 ping statistics --- 00:26:46.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:46.829 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:26:46.829 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:46.830 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@433 -- # return 0 00:26:46.830 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:46.830 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:46.830 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:46.830 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:46.830 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:46.830 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:46.830 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:46.830 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:46.830 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:46.830 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:46.830 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:46.830 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=95246 00:26:46.830 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 95246 00:26:46.830 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 95246 ']' 00:26:46.830 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:46.830 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:46.830 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:46.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:46.830 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:46.830 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:46.830 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:47.088 [2024-07-22 18:33:58.891017] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:26:47.088 [2024-07-22 18:33:58.891242] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:47.088 [2024-07-22 18:33:59.076699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:47.655 [2024-07-22 18:33:59.386798] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:47.655 [2024-07-22 18:33:59.386920] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:47.655 [2024-07-22 18:33:59.386938] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:47.655 [2024-07-22 18:33:59.386954] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:47.655 [2024-07-22 18:33:59.386966] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:47.655 [2024-07-22 18:33:59.387177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.655 [2024-07-22 18:33:59.388031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:47.655 [2024-07-22 18:33:59.388148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.655 [2024-07-22 18:33:59.388183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:47.913 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:47.913 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:26:47.913 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:47.913 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:47.913 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:47.913 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:47.913 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:47.913 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:47.913 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.913 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:48.171 Malloc0 00:26:48.171 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.171 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d 
Delay0 -r 30 -t 30 -w 30 -n 30 00:26:48.171 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.171 18:33:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:48.171 Delay0 00:26:48.171 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.171 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:48.171 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.171 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:48.171 [2024-07-22 18:34:00.010231] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:48.171 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.171 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:48.171 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.171 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:48.171 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.171 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:48.171 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.171 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:48.171 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.171 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:48.171 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.171 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:48.171 [2024-07-22 18:34:00.042761] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:48.171 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.172 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:48.430 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:48.430 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:26:48.430 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:26:48.430 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:48.430 18:34:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:26:50.332 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:50.332 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:50.332 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:26:50.332 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:50.332 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:50.332 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:26:50.332 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=95327 00:26:50.332 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:50.332 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:50.332 [global] 00:26:50.332 thread=1 00:26:50.332 invalidate=1 00:26:50.332 rw=write 00:26:50.332 time_based=1 00:26:50.332 runtime=60 00:26:50.332 ioengine=libaio 00:26:50.332 direct=1 00:26:50.332 bs=4096 00:26:50.332 iodepth=1 00:26:50.332 norandommap=0 00:26:50.332 numjobs=1 00:26:50.332 00:26:50.332 verify_dump=1 00:26:50.332 verify_backlog=512 00:26:50.332 verify_state_save=0 00:26:50.332 do_verify=1 00:26:50.332 verify=crc32c-intel 00:26:50.332 [job0] 00:26:50.332 filename=/dev/nvme0n1 00:26:50.332 Could not set queue depth (nvme0n1) 00:26:50.590 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:50.590 fio-3.35 00:26:50.590 Starting 1 thread 00:26:53.874 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:53.874 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.874 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:53.874 true 00:26:53.874 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.874 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:53.874 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.874 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:53.874 true 00:26:53.874 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.874 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:53.874 18:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.874 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:53.874 true 00:26:53.874 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.874 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:53.874 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.874 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:53.874 true 00:26:53.874 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.874 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:56.403 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:56.403 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.403 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:56.403 true 00:26:56.403 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.403 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:56.403 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.403 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:56.403 true 00:26:56.403 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.403 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:56.403 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.403 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:56.403 true 00:26:56.403 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.403 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:56.403 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.403 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:56.403 true 00:26:56.403 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.403 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:56.403 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 95327 00:27:52.613 00:27:52.613 job0: (groupid=0, jobs=1): err= 0: pid=95349: Mon Jul 22 
18:35:02 2024 00:27:52.613 read: IOPS=640, BW=2560KiB/s (2621kB/s)(150MiB/60000msec) 00:27:52.613 slat (usec): min=12, max=7689, avg=17.72, stdev=53.67 00:27:52.614 clat (usec): min=211, max=40390k, avg=1309.19, stdev=206113.70 00:27:52.614 lat (usec): min=224, max=40390k, avg=1326.91, stdev=206113.78 00:27:52.614 clat percentiles (usec): 00:27:52.614 | 1.00th=[ 225], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 237], 00:27:52.614 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 255], 00:27:52.614 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 314], 00:27:52.614 | 99.00th=[ 347], 99.50th=[ 367], 99.90th=[ 445], 99.95th=[ 603], 00:27:52.614 | 99.99th=[ 979] 00:27:52.614 write: IOPS=645, BW=2583KiB/s (2645kB/s)(151MiB/60000msec); 0 zone resets 00:27:52.614 slat (usec): min=19, max=661, avg=26.06, stdev= 7.98 00:27:52.614 clat (usec): min=158, max=2440, avg=203.48, stdev=28.91 00:27:52.614 lat (usec): min=188, max=2463, avg=229.54, stdev=32.52 00:27:52.614 clat percentiles (usec): 00:27:52.614 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 186], 00:27:52.614 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 202], 00:27:52.614 | 70.00th=[ 208], 80.00th=[ 221], 90.00th=[ 235], 95.00th=[ 249], 00:27:52.614 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 379], 99.95th=[ 523], 00:27:52.614 | 99.99th=[ 955] 00:27:52.614 bw ( KiB/s): min= 4096, max= 9136, per=100.00%, avg=7768.18, stdev=1143.77, samples=39 00:27:52.614 iops : min= 1024, max= 2284, avg=1942.03, stdev=285.93, samples=39 00:27:52.614 lat (usec) : 250=74.30%, 500=25.64%, 750=0.03%, 1000=0.02% 00:27:52.614 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:27:52.614 cpu : usr=0.58%, sys=2.04%, ctx=77165, majf=0, minf=2 00:27:52.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:52.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.614 issued rwts: total=38400,38741,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.614 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:52.614 00:27:52.614 Run status group 0 (all jobs): 00:27:52.614 READ: bw=2560KiB/s (2621kB/s), 2560KiB/s-2560KiB/s (2621kB/s-2621kB/s), io=150MiB (157MB), run=60000-60000msec 00:27:52.614 WRITE: bw=2583KiB/s (2645kB/s), 2583KiB/s-2583KiB/s (2645kB/s-2645kB/s), io=151MiB (159MB), run=60000-60000msec 00:27:52.614 00:27:52.614 Disk stats (read/write): 00:27:52.614 nvme0n1: ios=38520/38400, merge=0/0, ticks=10204/8270, in_queue=18474, util=99.91% 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:52.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:52.614 18:35:02 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:52.614 nvmf hotplug test: fio successful as expected 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:52.614 rmmod nvme_tcp 00:27:52.614 rmmod nvme_fabrics 00:27:52.614 rmmod nvme_keyring 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 95246 ']' 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 95246 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 95246 ']' 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 95246 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95246 00:27:52.614 killing process with pid 
95246 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95246' 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 95246 00:27:52.614 18:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 95246 00:27:52.614 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:52.614 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:52.614 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:52.614 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:52.614 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:52.614 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.614 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:52.614 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.614 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:52.614 ************************************ 00:27:52.614 END TEST nvmf_initiator_timeout 00:27:52.614 ************************************ 00:27:52.614 00:27:52.614 real 1m5.927s 00:27:52.614 user 4m8.502s 00:27:52.614 sys 0m9.034s 00:27:52.614 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:52.614 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.614 18:35:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:27:52.614 18:35:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ virt == phy ]] 00:27:52.614 18:35:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:27:52.614 00:27:52.614 real 14m26.089s 00:27:52.614 user 43m28.868s 00:27:52.614 sys 2m22.937s 00:27:52.614 18:35:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:52.614 ************************************ 00:27:52.614 END TEST nvmf_target_extra 00:27:52.614 ************************************ 00:27:52.614 18:35:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:52.614 18:35:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:52.614 18:35:04 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:52.614 18:35:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:52.614 18:35:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:52.614 18:35:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:52.614 
************************************ 00:27:52.614 START TEST nvmf_host 00:27:52.614 ************************************ 00:27:52.614 18:35:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:52.614 * Looking for test storage... 00:27:52.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:27:52.614 18:35:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:52.614 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:27:52.614 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:52.614 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:52.614 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:52.614 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:52.614 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:52.614 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:52.614 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:52.614 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:52.614 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:52.614 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:52.614 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.615 ************************************ 00:27:52.615 START TEST nvmf_multicontroller 00:27:52.615 ************************************ 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:52.615 * Looking for test storage... 
00:27:52.615 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:52.615 
18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:52.615 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:52.616 18:35:04 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:52.616 Cannot find device "nvmf_tgt_br" 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:52.616 Cannot find device "nvmf_tgt_br2" 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:52.616 Cannot find device "nvmf_tgt_br" 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:27:52.616 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:52.874 Cannot find device "nvmf_tgt_br2" 00:27:52.874 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:27:52.874 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:52.874 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:52.874 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:52.875 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:52.875 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set 
nvmf_init_br up 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:52.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:52.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:27:52.875 00:27:52.875 --- 10.0.0.2 ping statistics --- 00:27:52.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.875 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:27:52.875 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:53.133 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:53.133 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:27:53.133 00:27:53.133 --- 10.0.0.3 ping statistics --- 00:27:53.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.133 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:27:53.133 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:53.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:53.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:27:53.133 00:27:53.133 --- 10.0.0.1 ping statistics --- 00:27:53.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.133 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:27:53.133 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:53.133 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:27:53.133 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:53.133 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:53.133 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:53.133 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:53.133 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:53.133 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:53.133 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:53.133 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:53.133 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:53.133 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:53.133 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.133 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=96192 00:27:53.133 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:53.133 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 96192 00:27:53.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:53.133 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 96192 ']' 00:27:53.133 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.133 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:53.133 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:53.133 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:53.133 18:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.133 [2024-07-22 18:35:05.048049] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
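The nvmf_veth_init sequence traced above builds the virtual test network used by the nvmf_host tests on this run: a network namespace (nvmf_tgt_ns_spdk) holding the target-side veth endpoints, a bridge (nvmf_br) joining both sides, 10.0.0.1 on the initiator interface and 10.0.0.2/10.0.0.3 inside the namespace, followed by ping checks in both directions. A condensed, standalone sketch of the same topology, with interface names and addresses taken verbatim from the trace (the second target interface nvmf_tgt_if2/10.0.0.3 follows the same pattern and is omitted; run as root; illustrative only, not a substitute for nvmf_veth_init):

  ip netns add nvmf_tgt_ns_spdk                                 # target will run inside this namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                               # bridge ties the two veth halves together
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT           # allow forwarding across the bridge
  ping -c 1 10.0.0.2                                            # initiator -> target reachability, as checked above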
00:27:53.133 [2024-07-22 18:35:05.048480] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:53.391 [2024-07-22 18:35:05.226301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:53.648 [2024-07-22 18:35:05.545508] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:53.649 [2024-07-22 18:35:05.546009] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:53.649 [2024-07-22 18:35:05.546040] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:53.649 [2024-07-22 18:35:05.546068] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:53.649 [2024-07-22 18:35:05.546088] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:53.649 [2024-07-22 18:35:05.546619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:53.649 [2024-07-22 18:35:05.546756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.649 [2024-07-22 18:35:05.546763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:54.215 18:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:54.215 18:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:27:54.215 18:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:54.215 18:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:54.215 18:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:54.215 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:54.215 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:54.215 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.215 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:54.215 [2024-07-22 18:35:06.027175] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:54.215 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.215 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:54.215 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.215 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:54.215 Malloc0 00:27:54.215 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.215 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:54.215 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.215 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@10 -- # set +x 00:27:54.215 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.215 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:54.216 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.216 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:54.216 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.216 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:54.216 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.216 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:54.216 [2024-07-22 18:35:06.171877] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:54.216 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.216 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:54.216 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.216 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:54.216 [2024-07-22 18:35:06.179647] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:54.216 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.216 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:54.216 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.216 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:54.474 Malloc1 00:27:54.474 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.474 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:54.474 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.474 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:54.475 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.475 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:54.475 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.475 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:54.475 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.475 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:54.475 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.475 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:54.475 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.475 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:54.475 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.475 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:54.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:54.475 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.475 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=96245 00:27:54.475 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:54.475 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:54.475 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 96245 /var/tmp/bdevperf.sock 00:27:54.475 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 96245 ']' 00:27:54.475 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:54.475 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:54.475 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
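Before any controller is attached, the trace above provisions the target side over JSON-RPC: a TCP transport, one 64 MiB malloc bdev per subsystem, and two subsystems (cnode1 and cnode2) that each listen on 10.0.0.2 ports 4420 and 4421; bdevperf is then launched against its own RPC socket (/var/tmp/bdevperf.sock) so the bdev_nvme_attach_controller calls below can be driven over RPC. The rpc_cmd helper in the trace wraps the SPDK rpc.py client; a condensed sketch of the cnode1 half of that setup, with arguments copied from the trace (cnode2 differs only in subsystem name, serial and bdev):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options exactly as in the trace
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The -114 errors that follow are expected: the test verifies that re-attaching a controller named NVMe0 over the same network path is rejected (whether the host NQN, the target subsystem, or the multipath mode differs), while attaching NVMe0 through the second listener on port 4421, and later NVMe1, succeeds.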
00:27:54.475 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:54.475 18:35:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.410 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:55.410 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:27:55.410 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:55.411 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.411 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.670 NVMe0n1 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.670 1 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.670 2024/07/22 18:35:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 
hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:27:55.670 request: 00:27:55.670 { 00:27:55.670 "method": "bdev_nvme_attach_controller", 00:27:55.670 "params": { 00:27:55.670 "name": "NVMe0", 00:27:55.670 "trtype": "tcp", 00:27:55.670 "traddr": "10.0.0.2", 00:27:55.670 "adrfam": "ipv4", 00:27:55.670 "trsvcid": "4420", 00:27:55.670 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:55.670 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:55.670 "hostaddr": "10.0.0.2", 00:27:55.670 "hostsvcid": "60000", 00:27:55.670 "prchk_reftag": false, 00:27:55.670 "prchk_guard": false, 00:27:55.670 "hdgst": false, 00:27:55.670 "ddgst": false 00:27:55.670 } 00:27:55.670 } 00:27:55.670 Got JSON-RPC error response 00:27:55.670 GoRPCClient: error on JSON-RPC call 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.670 2024/07/22 18:35:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 
trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:27:55.670 request: 00:27:55.670 { 00:27:55.670 "method": "bdev_nvme_attach_controller", 00:27:55.670 "params": { 00:27:55.670 "name": "NVMe0", 00:27:55.670 "trtype": "tcp", 00:27:55.670 "traddr": "10.0.0.2", 00:27:55.670 "adrfam": "ipv4", 00:27:55.670 "trsvcid": "4420", 00:27:55.670 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:55.670 "hostaddr": "10.0.0.2", 00:27:55.670 "hostsvcid": "60000", 00:27:55.670 "prchk_reftag": false, 00:27:55.670 "prchk_guard": false, 00:27:55.670 "hdgst": false, 00:27:55.670 "ddgst": false 00:27:55.670 } 00:27:55.670 } 00:27:55.670 Got JSON-RPC error response 00:27:55.670 GoRPCClient: error on JSON-RPC call 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.670 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.670 2024/07/22 18:35:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:27:55.670 request: 00:27:55.670 { 
00:27:55.670 "method": "bdev_nvme_attach_controller", 00:27:55.670 "params": { 00:27:55.670 "name": "NVMe0", 00:27:55.670 "trtype": "tcp", 00:27:55.670 "traddr": "10.0.0.2", 00:27:55.670 "adrfam": "ipv4", 00:27:55.670 "trsvcid": "4420", 00:27:55.670 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:55.670 "hostaddr": "10.0.0.2", 00:27:55.670 "hostsvcid": "60000", 00:27:55.670 "prchk_reftag": false, 00:27:55.670 "prchk_guard": false, 00:27:55.670 "hdgst": false, 00:27:55.670 "ddgst": false, 00:27:55.670 "multipath": "disable" 00:27:55.670 } 00:27:55.670 } 00:27:55.670 Got JSON-RPC error response 00:27:55.670 GoRPCClient: error on JSON-RPC call 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.671 2024/07/22 18:35:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:27:55.671 request: 00:27:55.671 { 00:27:55.671 "method": "bdev_nvme_attach_controller", 00:27:55.671 "params": { 00:27:55.671 "name": "NVMe0", 00:27:55.671 "trtype": "tcp", 00:27:55.671 
"traddr": "10.0.0.2", 00:27:55.671 "adrfam": "ipv4", 00:27:55.671 "trsvcid": "4420", 00:27:55.671 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:55.671 "hostaddr": "10.0.0.2", 00:27:55.671 "hostsvcid": "60000", 00:27:55.671 "prchk_reftag": false, 00:27:55.671 "prchk_guard": false, 00:27:55.671 "hdgst": false, 00:27:55.671 "ddgst": false, 00:27:55.671 "multipath": "failover" 00:27:55.671 } 00:27:55.671 } 00:27:55.671 Got JSON-RPC error response 00:27:55.671 GoRPCClient: error on JSON-RPC call 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.671 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.671 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.929 00:27:55.929 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.929 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:55.929 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:55.929 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.929 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.929 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.929 18:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:55.929 18:35:07 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:56.877 0 00:27:56.877 18:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:56.877 18:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.877 18:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:57.146 18:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.146 18:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 96245 00:27:57.146 18:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 96245 ']' 00:27:57.146 18:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 96245 00:27:57.146 18:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:27:57.146 18:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:57.146 18:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96245 00:27:57.146 killing process with pid 96245 00:27:57.146 18:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:57.146 18:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:57.146 18:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96245' 00:27:57.146 18:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 96245 00:27:57.146 18:35:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 96245 00:27:58.522 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:58.522 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.522 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.522 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.522 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:58.522 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.522 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.522 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.522 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:27:58.522 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:58.522 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:27:58.522 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:27:58.522 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:27:58.522 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:27:58.522 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:27:58.522 [2024-07-22 18:35:06.453005] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:27:58.522 [2024-07-22 18:35:06.453243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96245 ] 00:27:58.522 [2024-07-22 18:35:06.632442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.522 [2024-07-22 18:35:06.928582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.522 [2024-07-22 18:35:07.688524] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 13a535f9-2f06-47d6-9395-29c5d015444f already exists 00:27:58.522 [2024-07-22 18:35:07.688642] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:13a535f9-2f06-47d6-9395-29c5d015444f alias for bdev NVMe1n1 00:27:58.522 [2024-07-22 18:35:07.688674] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:58.522 Running I/O for 1 seconds... 00:27:58.522 00:27:58.522 Latency(us) 00:27:58.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.522 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:58.522 NVMe0n1 : 1.00 14470.25 56.52 0.00 0.00 8830.24 5153.51 17039.36 00:27:58.522 =================================================================================================================== 00:27:58.522 Total : 14470.25 56.52 0.00 0.00 8830.24 5153.51 17039.36 00:27:58.522 Received shutdown signal, test time was about 1.000000 seconds 00:27:58.522 00:27:58.522 Latency(us) 00:27:58.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.522 =================================================================================================================== 00:27:58.522 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:58.523 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:27:58.523 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:58.523 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:27:58.523 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:27:58.523 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:58.523 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:27:58.523 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:58.523 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:27:58.523 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:58.523 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:58.523 rmmod nvme_tcp 00:27:58.523 rmmod nvme_fabrics 00:27:58.523 rmmod nvme_keyring 00:27:58.523 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:58.523 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:27:58.523 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:27:58.523 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 96192 ']' 00:27:58.523 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 96192 00:27:58.523 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 96192 ']' 00:27:58.523 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 96192 00:27:58.523 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:27:58.523 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:58.523 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96192 00:27:58.523 killing process with pid 96192 00:27:58.523 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:58.523 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:58.523 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96192' 00:27:58.523 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 96192 00:27:58.523 18:35:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 96192 00:28:00.424 18:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:00.424 18:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:00.424 18:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:00.424 18:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:00.424 18:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:00.425 00:28:00.425 real 0m7.641s 00:28:00.425 user 0m22.859s 00:28:00.425 sys 0m1.535s 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:00.425 ************************************ 00:28:00.425 END TEST nvmf_multicontroller 00:28:00.425 ************************************ 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
']' 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.425 ************************************ 00:28:00.425 START TEST nvmf_aer 00:28:00.425 ************************************ 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:00.425 * Looking for test storage... 00:28:00.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 
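
At this point nvmf/common.sh has finished assembling the target launch array (NVMF_APP gains -i "$NVMF_APP_SHM_ID" -e 0xFFFF here, and the netns wrapper is prepended once the test namespace exists). For this run the array resolves to the nvmf_tgt invocation that is recorded verbatim a little further down in the trace; as a plain-shell sketch of that single command:

    # Launch the NVMe-oF target inside the test namespace:
    # shared-memory id 0, all tracepoint groups enabled (-e 0xFFFF), 4-core mask (-m 0xF).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

waitforlisten then polls the default RPC socket /var/tmp/spdk.sock before any rpc_cmd is issued.
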
00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:00.425 Cannot find device "nvmf_tgt_br" 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # true 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:00.425 Cannot find device "nvmf_tgt_br2" 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # true 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:00.425 18:35:12 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:00.425 Cannot find device "nvmf_tgt_br" 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # true 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:00.425 Cannot find device "nvmf_tgt_br2" 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # true 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:00.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:00.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 00:28:00.425 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:00.426 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:00.426 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:00.426 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:00.426 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:00.426 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:00.426 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:00.426 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:00.684 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:00.684 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:00.684 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:00.684 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:00.684 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:00.684 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:00.684 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:00.684 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:00.684 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:00.684 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:00.684 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # 
ip link set nvmf_init_br master nvmf_br 00:28:00.684 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:00.684 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:00.684 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:00.684 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:00.684 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:00.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:00.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:28:00.684 00:28:00.684 --- 10.0.0.2 ping statistics --- 00:28:00.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.684 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:28:00.684 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:00.684 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:00.685 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:28:00.685 00:28:00.685 --- 10.0.0.3 ping statistics --- 00:28:00.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.685 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:28:00.685 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:00.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:00.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:28:00.685 00:28:00.685 --- 10.0.0.1 ping statistics --- 00:28:00.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.685 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:28:00.685 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:00.685 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:28:00.685 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:00.685 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:00.685 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:00.685 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:00.685 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:00.685 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:00.685 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:00.685 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:00.685 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:00.685 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:00.685 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:00.685 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=96516 00:28:00.685 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:00.685 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 96516 00:28:00.685 18:35:12 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 96516 ']' 00:28:00.685 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.685 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:00.685 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.685 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:00.685 18:35:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:00.943 [2024-07-22 18:35:12.739912] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:28:00.943 [2024-07-22 18:35:12.740107] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:00.943 [2024-07-22 18:35:12.911132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:01.202 [2024-07-22 18:35:13.209917] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:01.202 [2024-07-22 18:35:13.210018] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:01.202 [2024-07-22 18:35:13.210035] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:01.202 [2024-07-22 18:35:13.210075] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:01.202 [2024-07-22 18:35:13.210089] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
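
With the target process up and listening on /var/tmp/spdk.sock, the harness configures it over JSON-RPC: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 capped at two namespaces (-m 2), namespace 1 backed by Malloc0, and a listener on 10.0.0.2:4420; these are the rpc_cmd calls that follow. rpc_cmd is the harness wrapper around SPDK's JSON-RPC client, so a rough standalone equivalent using scripts/rpc.py (path assumed from the repo layout shown in this trace, flags copied as issued here) would be:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

    rpc nvmf_create_transport -t tcp -o -u 8192                # TCP transport, -o and -u 8192 exactly as issued above
    rpc bdev_malloc_create 64 512 --name Malloc0               # 64 MB ramdisk, 512 B blocks
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 2                          # allow any host, serial number, max 2 namespaces
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The nvmf_get_subsystems output further down reflects exactly this state: one discovery subsystem and cnode1 with allow_any_host, max_namespaces 2, Malloc0 as nsid 1, and the 10.0.0.2:4420 TCP listener.
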
00:28:01.202 [2024-07-22 18:35:13.210488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.202 [2024-07-22 18:35:13.210774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:01.202 [2024-07-22 18:35:13.210910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.202 [2024-07-22 18:35:13.210925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:01.769 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:01.769 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:28:01.769 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:01.769 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:01.769 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:01.769 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:01.770 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:01.770 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.770 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:01.770 [2024-07-22 18:35:13.716512] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:01.770 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.770 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:01.770 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.770 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:02.048 Malloc0 00:28:02.048 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.048 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:02.048 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.048 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:02.048 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.048 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:02.048 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.048 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:02.048 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.048 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:02.048 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.048 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:02.048 [2024-07-22 18:35:13.836346] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:02.048 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.048 
18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:02.048 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.048 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:02.048 [ 00:28:02.048 { 00:28:02.048 "allow_any_host": true, 00:28:02.048 "hosts": [], 00:28:02.048 "listen_addresses": [], 00:28:02.048 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:02.048 "subtype": "Discovery" 00:28:02.048 }, 00:28:02.048 { 00:28:02.048 "allow_any_host": true, 00:28:02.048 "hosts": [], 00:28:02.048 "listen_addresses": [ 00:28:02.048 { 00:28:02.048 "adrfam": "IPv4", 00:28:02.048 "traddr": "10.0.0.2", 00:28:02.048 "trsvcid": "4420", 00:28:02.048 "trtype": "TCP" 00:28:02.048 } 00:28:02.048 ], 00:28:02.048 "max_cntlid": 65519, 00:28:02.048 "max_namespaces": 2, 00:28:02.048 "min_cntlid": 1, 00:28:02.048 "model_number": "SPDK bdev Controller", 00:28:02.048 "namespaces": [ 00:28:02.048 { 00:28:02.048 "bdev_name": "Malloc0", 00:28:02.048 "name": "Malloc0", 00:28:02.048 "nguid": "C2EA847209AB42CF875FC26F3A0C97EF", 00:28:02.048 "nsid": 1, 00:28:02.048 "uuid": "c2ea8472-09ab-42cf-875f-c26f3a0c97ef" 00:28:02.048 } 00:28:02.048 ], 00:28:02.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:02.048 "serial_number": "SPDK00000000000001", 00:28:02.049 "subtype": "NVMe" 00:28:02.049 } 00:28:02.049 ] 00:28:02.049 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.049 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:02.049 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:02.049 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=96570 00:28:02.049 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:02.049 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:28:02.049 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:02.049 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:02.049 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:28:02.049 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:28:02.049 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:02.049 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:02.049 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:28:02.049 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:28:02.049 18:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:02.307 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:02.307 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:28:02.307 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:28:02.307 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:02.307 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:02.307 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:02.307 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:28:02.307 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:02.307 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.307 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:02.566 Malloc1 00:28:02.566 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.566 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:02.566 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.566 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:02.566 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.566 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:02.566 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.566 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:02.566 [ 00:28:02.566 { 00:28:02.566 "allow_any_host": true, 00:28:02.566 "hosts": [], 00:28:02.566 "listen_addresses": [], 00:28:02.566 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:02.566 "subtype": "Discovery" 00:28:02.566 }, 00:28:02.566 { 00:28:02.566 "allow_any_host": true, 00:28:02.566 "hosts": [], 00:28:02.566 "listen_addresses": [ 00:28:02.566 { 00:28:02.566 "adrfam": "IPv4", 00:28:02.566 "traddr": "10.0.0.2", 00:28:02.566 "trsvcid": "4420", 00:28:02.566 "trtype": "TCP" 00:28:02.566 } 00:28:02.566 ], 00:28:02.566 "max_cntlid": 65519, 00:28:02.566 "max_namespaces": 2, 00:28:02.566 "min_cntlid": 1, 00:28:02.566 "model_number": "SPDK bdev Controller", 00:28:02.566 "namespaces": [ 00:28:02.566 { 00:28:02.566 "bdev_name": "Malloc0", 00:28:02.566 "name": "Malloc0", 00:28:02.566 "nguid": "C2EA847209AB42CF875FC26F3A0C97EF", 00:28:02.566 "nsid": 1, 00:28:02.566 "uuid": "c2ea8472-09ab-42cf-875f-c26f3a0c97ef" 00:28:02.566 }, 00:28:02.566 { 00:28:02.566 "bdev_name": "Malloc1", 00:28:02.566 "name": "Malloc1", 00:28:02.566 "nguid": "6584C5876A5540DBBDBA303E52317E25", 00:28:02.566 "nsid": 2, 00:28:02.566 "uuid": "6584c587-6a55-40db-bdba-303e52317e25" 00:28:02.566 } 00:28:02.566 ], 00:28:02.566 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:02.566 "serial_number": "SPDK00000000000001", 00:28:02.566 "subtype": "NVMe" 00:28:02.566 } 00:28:02.566 ] 00:28:02.566 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.566 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 96570 00:28:02.566 Asynchronous Event Request test 00:28:02.566 Attaching to 10.0.0.2 00:28:02.566 Attached to 10.0.0.2 00:28:02.566 Registering 
asynchronous event callbacks... 00:28:02.566 Starting namespace attribute notice tests for all controllers... 00:28:02.566 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:02.566 aer_cb - Changed Namespace 00:28:02.566 Cleaning up... 00:28:02.566 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:02.566 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.566 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:02.824 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.824 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:02.824 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.824 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:02.824 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.824 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:02.824 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.825 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:02.825 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.825 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:02.825 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:02.825 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:02.825 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:03.082 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:03.082 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:03.082 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:03.082 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:03.082 rmmod nvme_tcp 00:28:03.082 rmmod nvme_fabrics 00:28:03.082 rmmod nvme_keyring 00:28:03.082 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:03.082 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:28:03.082 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:03.082 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 96516 ']' 00:28:03.082 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 96516 00:28:03.082 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 96516 ']' 00:28:03.082 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 96516 00:28:03.082 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:28:03.082 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:03.082 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96516 00:28:03.082 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:03.082 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
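
The aer run above is the core of the test: the listener registers Asynchronous Event Requests against cnode1, and when the harness hot-adds a second namespace the target raises a Notice AEN (aen_event_type 0x02, aen_event_info 0x00, i.e. Namespace Attribute Changed), whose handler reads log page 4 (Changed Namespace List), hence the "aer_cb - Changed Namespace" line before cleanup. A condensed sketch of that sequence, reusing the commands recorded in this trace (the reading of -t as a readiness signal is inferred from the ordering of waitforfile and the namespace add above, not documented here):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }   # default socket /var/tmp/spdk.sock

    touch_file=/tmp/aer_touch_file
    rm -f "$touch_file"

    # Start the AER listener; -r, -n 2 and -t are exactly as used by host/aer.sh.
    /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer \
        -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t "$touch_file" &
    aerpid=$!

    # Block until the listener signals it is ready (it creates the touch file).
    while [ ! -e "$touch_file" ]; do sleep 0.1; done

    # Hot-add namespace 2; this is what triggers the Namespace Attribute Changed notice.
    rpc bdev_malloc_create 64 4096 --name Malloc1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

    # The listener exits once it has observed the changed-namespace event.
    wait "$aerpid"
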
00:28:03.082 killing process with pid 96516 00:28:03.082 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96516' 00:28:03.082 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@967 -- # kill 96516 00:28:03.082 18:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # wait 96516 00:28:04.458 18:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:04.458 18:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:04.458 18:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:04.458 18:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:04.458 18:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:04.458 18:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.458 18:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:04.458 18:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.458 18:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:04.458 ************************************ 00:28:04.458 END TEST nvmf_aer 00:28:04.458 ************************************ 00:28:04.458 00:28:04.458 real 0m4.209s 00:28:04.458 user 0m11.200s 00:28:04.458 sys 0m1.010s 00:28:04.458 18:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:04.458 18:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.458 18:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:28:04.458 18:35:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:04.458 18:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:04.458 18:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:04.458 18:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.458 ************************************ 00:28:04.459 START TEST nvmf_async_init 00:28:04.459 ************************************ 00:28:04.459 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:04.459 * Looking for test storage... 
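
nvmf_async_init starts here and its nvmftestinit step rebuilds the same virtual-ethernet topology that nvmf_aer used above: the initiator keeps 10.0.0.1 on nvmf_init_if, the target sits in the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3 on nvmf_tgt_if/nvmf_tgt_if2, and all veth peers are joined through the nvmf_br bridge, with TCP port 4420 opened in iptables. Condensed from the ip/iptables commands recorded in the aer section (the preceding best-effort teardown and its "Cannot find device" noise omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The ping checks that follow in the trace (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm the bridge is passing traffic before the target application is started.
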
00:28:04.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:04.459 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:04.459 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:04.718 18:35:16 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=fdd5fdccffdd48b4bd1d308b73ca6101 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:04.718 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:04.718 Cannot find device "nvmf_tgt_br" 00:28:04.719 18:35:16 
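The hostnqn, hostid and namespace NGUID that the rest of the async_init run keys on are generated on the fly in the trace above (nvme gen-hostnqn in common.sh, uuidgen | tr -d - in async_init.sh). A minimal sketch of that derivation, assuming the hostid is simply the UUID portion of the generated hostnqn, which is what the logged values show:

# Illustrative only; variable names follow the trace, the hostid derivation is an assumption.
NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da
NVME_HOSTID=${NVME_HOSTNQN##*:}        # the UUID part, matching the logged NVME_HOSTID
nguid=$(uuidgen | tr -d -)             # 32 hex digits, e.g. fdd5fdccffdd48b4bd1d308b73ca6101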
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:04.719 Cannot find device "nvmf_tgt_br2" 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:04.719 Cannot find device "nvmf_tgt_br" 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:04.719 Cannot find device "nvmf_tgt_br2" 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:04.719 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:04.719 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:04.719 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:04.978 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:04.978 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:04.978 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:04.978 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:04.978 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:04.978 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:04.978 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:04.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:04.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:28:04.979 00:28:04.979 --- 10.0.0.2 ping statistics --- 00:28:04.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.979 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:04.979 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:04.979 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:28:04.979 00:28:04.979 --- 10.0.0.3 ping statistics --- 00:28:04.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.979 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:04.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:04.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:28:04.979 00:28:04.979 --- 10.0.0.1 ping statistics --- 00:28:04.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.979 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=96756 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 96756 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 96756 ']' 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:04.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:04.979 18:35:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:04.979 [2024-07-22 18:35:16.985376] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:28:04.979 [2024-07-22 18:35:16.985577] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:05.237 [2024-07-22 18:35:17.159898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.495 [2024-07-22 18:35:17.438440] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
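Everything from the "Cannot find device" warnings up to the three pings above is nvmf_veth_init tearing down stale interfaces and rebuilding the virtual TCP test network, after which nvmfappstart launches the target inside the new namespace. A condensed sketch of that topology, built only from commands that appear verbatim in the trace (names and addresses as logged; requires root):

# Initiator stays in the root namespace on 10.0.0.1, the target gets 10.0.0.2/10.0.0.3 inside
# nvmf_tgt_ns_spdk, and a bridge stitches the veth peers together.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow hairpin forwarding on the bridge
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # root namespace reaches both target IPs
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                   # and the namespace reaches the initiator
modprobe nvme-tcp                                                   # the harness also loads the kernel initiator module
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &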
00:28:05.495 [2024-07-22 18:35:17.438518] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:05.495 [2024-07-22 18:35:17.438536] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:05.495 [2024-07-22 18:35:17.438553] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:05.495 [2024-07-22 18:35:17.438564] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:05.495 [2024-07-22 18:35:17.438619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.061 18:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:06.061 18:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:28:06.061 18:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:06.061 18:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:06.061 18:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.061 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:06.061 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:06.061 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.061 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.061 [2024-07-22 18:35:18.033003] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:06.061 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.061 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:06.061 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.061 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.061 null0 00:28:06.061 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.061 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:06.061 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.061 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.061 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.061 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:06.061 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.061 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.061 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.061 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g fdd5fdccffdd48b4bd1d308b73ca6101 00:28:06.061 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 
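With the target up, the async_init test provisions it entirely over JSON-RPC through the rpc_cmd helper; the listener and the host-side controller attach it needs show up immediately below in the trace. A hedged sketch of the same sequence, assuming rpc_cmd is equivalent to invoking scripts/rpc.py against the target's default /var/tmp/spdk.sock:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py      # assumption: rpc_cmd forwards here
$rpc nvmf_create_transport -t tcp -o                 # TCP transport, flags as traced
$rpc bdev_null_create null0 1024 512                 # 1024 MiB null bdev, 512 B blocks (2097152 blocks in the dumps)
$rpc bdev_wait_for_examine
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a                  # -a: allow any host for now
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"   # namespace gets the generated NGUID
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Host side of the same process: build an NVMe bdev on top of the remote namespace and inspect it.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
$rpc bdev_get_bdevs -b nvme0n1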
-- # xtrace_disable 00:28:06.061 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.061 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.319 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:06.319 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.319 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.319 [2024-07-22 18:35:18.085314] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:06.319 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.319 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:06.319 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.319 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.319 nvme0n1 00:28:06.319 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.319 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:06.319 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.319 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.578 [ 00:28:06.578 { 00:28:06.578 "aliases": [ 00:28:06.578 "fdd5fdcc-ffdd-48b4-bd1d-308b73ca6101" 00:28:06.578 ], 00:28:06.578 "assigned_rate_limits": { 00:28:06.578 "r_mbytes_per_sec": 0, 00:28:06.578 "rw_ios_per_sec": 0, 00:28:06.578 "rw_mbytes_per_sec": 0, 00:28:06.578 "w_mbytes_per_sec": 0 00:28:06.579 }, 00:28:06.579 "block_size": 512, 00:28:06.579 "claimed": false, 00:28:06.579 "driver_specific": { 00:28:06.579 "mp_policy": "active_passive", 00:28:06.579 "nvme": [ 00:28:06.579 { 00:28:06.579 "ctrlr_data": { 00:28:06.579 "ana_reporting": false, 00:28:06.579 "cntlid": 1, 00:28:06.579 "firmware_revision": "24.09", 00:28:06.579 "model_number": "SPDK bdev Controller", 00:28:06.579 "multi_ctrlr": true, 00:28:06.579 "oacs": { 00:28:06.579 "firmware": 0, 00:28:06.579 "format": 0, 00:28:06.579 "ns_manage": 0, 00:28:06.579 "security": 0 00:28:06.579 }, 00:28:06.579 "serial_number": "00000000000000000000", 00:28:06.579 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:06.579 "vendor_id": "0x8086" 00:28:06.579 }, 00:28:06.579 "ns_data": { 00:28:06.579 "can_share": true, 00:28:06.579 "id": 1 00:28:06.579 }, 00:28:06.579 "trid": { 00:28:06.579 "adrfam": "IPv4", 00:28:06.579 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:06.579 "traddr": "10.0.0.2", 00:28:06.579 "trsvcid": "4420", 00:28:06.579 "trtype": "TCP" 00:28:06.579 }, 00:28:06.579 "vs": { 00:28:06.579 "nvme_version": "1.3" 00:28:06.579 } 00:28:06.579 } 00:28:06.579 ] 00:28:06.579 }, 00:28:06.579 "memory_domains": [ 00:28:06.579 { 00:28:06.579 "dma_device_id": "system", 00:28:06.579 "dma_device_type": 1 00:28:06.579 } 00:28:06.579 ], 00:28:06.579 "name": "nvme0n1", 00:28:06.579 "num_blocks": 2097152, 00:28:06.579 "product_name": "NVMe disk", 00:28:06.579 "supported_io_types": { 00:28:06.579 "abort": true, 00:28:06.579 "compare": true, 
00:28:06.579 "compare_and_write": true, 00:28:06.579 "copy": true, 00:28:06.579 "flush": true, 00:28:06.579 "get_zone_info": false, 00:28:06.579 "nvme_admin": true, 00:28:06.579 "nvme_io": true, 00:28:06.579 "nvme_io_md": false, 00:28:06.579 "nvme_iov_md": false, 00:28:06.579 "read": true, 00:28:06.579 "reset": true, 00:28:06.579 "seek_data": false, 00:28:06.579 "seek_hole": false, 00:28:06.579 "unmap": false, 00:28:06.579 "write": true, 00:28:06.579 "write_zeroes": true, 00:28:06.579 "zcopy": false, 00:28:06.579 "zone_append": false, 00:28:06.579 "zone_management": false 00:28:06.579 }, 00:28:06.579 "uuid": "fdd5fdcc-ffdd-48b4-bd1d-308b73ca6101", 00:28:06.579 "zoned": false 00:28:06.579 } 00:28:06.579 ] 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.579 [2024-07-22 18:35:18.366721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:06.579 [2024-07-22 18:35:18.367046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:28:06.579 [2024-07-22 18:35:18.509286] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.579 [ 00:28:06.579 { 00:28:06.579 "aliases": [ 00:28:06.579 "fdd5fdcc-ffdd-48b4-bd1d-308b73ca6101" 00:28:06.579 ], 00:28:06.579 "assigned_rate_limits": { 00:28:06.579 "r_mbytes_per_sec": 0, 00:28:06.579 "rw_ios_per_sec": 0, 00:28:06.579 "rw_mbytes_per_sec": 0, 00:28:06.579 "w_mbytes_per_sec": 0 00:28:06.579 }, 00:28:06.579 "block_size": 512, 00:28:06.579 "claimed": false, 00:28:06.579 "driver_specific": { 00:28:06.579 "mp_policy": "active_passive", 00:28:06.579 "nvme": [ 00:28:06.579 { 00:28:06.579 "ctrlr_data": { 00:28:06.579 "ana_reporting": false, 00:28:06.579 "cntlid": 2, 00:28:06.579 "firmware_revision": "24.09", 00:28:06.579 "model_number": "SPDK bdev Controller", 00:28:06.579 "multi_ctrlr": true, 00:28:06.579 "oacs": { 00:28:06.579 "firmware": 0, 00:28:06.579 "format": 0, 00:28:06.579 "ns_manage": 0, 00:28:06.579 "security": 0 00:28:06.579 }, 00:28:06.579 "serial_number": "00000000000000000000", 00:28:06.579 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:06.579 "vendor_id": "0x8086" 00:28:06.579 }, 00:28:06.579 "ns_data": { 00:28:06.579 "can_share": true, 00:28:06.579 "id": 1 00:28:06.579 }, 00:28:06.579 "trid": { 00:28:06.579 "adrfam": "IPv4", 00:28:06.579 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:06.579 "traddr": "10.0.0.2", 00:28:06.579 "trsvcid": "4420", 00:28:06.579 "trtype": "TCP" 00:28:06.579 }, 00:28:06.579 "vs": { 00:28:06.579 "nvme_version": "1.3" 00:28:06.579 } 00:28:06.579 } 00:28:06.579 ] 00:28:06.579 }, 00:28:06.579 "memory_domains": [ 00:28:06.579 { 
00:28:06.579 "dma_device_id": "system", 00:28:06.579 "dma_device_type": 1 00:28:06.579 } 00:28:06.579 ], 00:28:06.579 "name": "nvme0n1", 00:28:06.579 "num_blocks": 2097152, 00:28:06.579 "product_name": "NVMe disk", 00:28:06.579 "supported_io_types": { 00:28:06.579 "abort": true, 00:28:06.579 "compare": true, 00:28:06.579 "compare_and_write": true, 00:28:06.579 "copy": true, 00:28:06.579 "flush": true, 00:28:06.579 "get_zone_info": false, 00:28:06.579 "nvme_admin": true, 00:28:06.579 "nvme_io": true, 00:28:06.579 "nvme_io_md": false, 00:28:06.579 "nvme_iov_md": false, 00:28:06.579 "read": true, 00:28:06.579 "reset": true, 00:28:06.579 "seek_data": false, 00:28:06.579 "seek_hole": false, 00:28:06.579 "unmap": false, 00:28:06.579 "write": true, 00:28:06.579 "write_zeroes": true, 00:28:06.579 "zcopy": false, 00:28:06.579 "zone_append": false, 00:28:06.579 "zone_management": false 00:28:06.579 }, 00:28:06.579 "uuid": "fdd5fdcc-ffdd-48b4-bd1d-308b73ca6101", 00:28:06.579 "zoned": false 00:28:06.579 } 00:28:06.579 ] 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.t8X3whfBao 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.t8X3whfBao 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.579 [2024-07-22 18:35:18.584216] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:06.579 [2024-07-22 18:35:18.584532] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.t8X3whfBao 00:28:06.579 18:35:18 
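The second half of the test exercises the experimental TLS path: allow-any-host is switched off, a second listener is opened on port 4421 with --secure-channel, and host1 is whitelisted with a pre-shared key; the host-side attach that reuses the same key follows just below in the trace (and triggers the deprecation warnings about the PSK path). A condensed sketch under the same assumption that rpc_cmd maps to scripts/rpc.py, with the interchange-format PSK copied verbatim from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key_path=$(mktemp)                                   # the traced run got /tmp/tmp.t8X3whfBao
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
chmod 0600 "$key_path"
$rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable            # only whitelisted hosts from here on
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
# Host side (next in the trace): attach with the matching hostnqn and PSK against the secure port.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"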
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.579 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.579 [2024-07-22 18:35:18.592238] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:06.838 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.838 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.t8X3whfBao 00:28:06.838 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.838 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.838 [2024-07-22 18:35:18.600187] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:06.838 [2024-07-22 18:35:18.600336] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:06.838 nvme0n1 00:28:06.838 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.838 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:06.838 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.838 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.838 [ 00:28:06.838 { 00:28:06.838 "aliases": [ 00:28:06.838 "fdd5fdcc-ffdd-48b4-bd1d-308b73ca6101" 00:28:06.838 ], 00:28:06.838 "assigned_rate_limits": { 00:28:06.838 "r_mbytes_per_sec": 0, 00:28:06.838 "rw_ios_per_sec": 0, 00:28:06.838 "rw_mbytes_per_sec": 0, 00:28:06.838 "w_mbytes_per_sec": 0 00:28:06.838 }, 00:28:06.838 "block_size": 512, 00:28:06.838 "claimed": false, 00:28:06.838 "driver_specific": { 00:28:06.838 "mp_policy": "active_passive", 00:28:06.838 "nvme": [ 00:28:06.838 { 00:28:06.838 "ctrlr_data": { 00:28:06.838 "ana_reporting": false, 00:28:06.838 "cntlid": 3, 00:28:06.838 "firmware_revision": "24.09", 00:28:06.838 "model_number": "SPDK bdev Controller", 00:28:06.838 "multi_ctrlr": true, 00:28:06.838 "oacs": { 00:28:06.838 "firmware": 0, 00:28:06.838 "format": 0, 00:28:06.838 "ns_manage": 0, 00:28:06.838 "security": 0 00:28:06.838 }, 00:28:06.838 "serial_number": "00000000000000000000", 00:28:06.838 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:06.838 "vendor_id": "0x8086" 00:28:06.838 }, 00:28:06.838 "ns_data": { 00:28:06.838 "can_share": true, 00:28:06.838 "id": 1 00:28:06.838 }, 00:28:06.838 "trid": { 00:28:06.838 "adrfam": "IPv4", 00:28:06.838 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:06.838 "traddr": "10.0.0.2", 00:28:06.838 "trsvcid": "4421", 00:28:06.838 "trtype": "TCP" 00:28:06.838 }, 00:28:06.839 "vs": { 00:28:06.839 "nvme_version": "1.3" 00:28:06.839 } 00:28:06.839 } 00:28:06.839 ] 00:28:06.839 }, 00:28:06.839 "memory_domains": [ 00:28:06.839 { 00:28:06.839 "dma_device_id": "system", 00:28:06.839 "dma_device_type": 1 00:28:06.839 } 00:28:06.839 ], 00:28:06.839 "name": "nvme0n1", 00:28:06.839 "num_blocks": 2097152, 00:28:06.839 "product_name": "NVMe disk", 00:28:06.839 "supported_io_types": { 00:28:06.839 "abort": true, 00:28:06.839 "compare": true, 00:28:06.839 
"compare_and_write": true, 00:28:06.839 "copy": true, 00:28:06.839 "flush": true, 00:28:06.839 "get_zone_info": false, 00:28:06.839 "nvme_admin": true, 00:28:06.839 "nvme_io": true, 00:28:06.839 "nvme_io_md": false, 00:28:06.839 "nvme_iov_md": false, 00:28:06.839 "read": true, 00:28:06.839 "reset": true, 00:28:06.839 "seek_data": false, 00:28:06.839 "seek_hole": false, 00:28:06.839 "unmap": false, 00:28:06.839 "write": true, 00:28:06.839 "write_zeroes": true, 00:28:06.839 "zcopy": false, 00:28:06.839 "zone_append": false, 00:28:06.839 "zone_management": false 00:28:06.839 }, 00:28:06.839 "uuid": "fdd5fdcc-ffdd-48b4-bd1d-308b73ca6101", 00:28:06.839 "zoned": false 00:28:06.839 } 00:28:06.839 ] 00:28:06.839 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.839 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.839 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.839 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:06.839 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.839 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.t8X3whfBao 00:28:06.839 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:06.839 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:06.839 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:06.839 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:06.839 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:06.839 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:06.839 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:06.839 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:06.839 rmmod nvme_tcp 00:28:06.839 rmmod nvme_fabrics 00:28:06.839 rmmod nvme_keyring 00:28:06.839 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:06.839 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:06.839 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:06.839 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 96756 ']' 00:28:06.839 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 96756 00:28:06.839 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 96756 ']' 00:28:06.839 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 96756 00:28:06.839 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:28:06.839 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:06.839 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96756 00:28:07.098 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:07.098 killing process with pid 96756 00:28:07.098 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:07.098 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96756' 00:28:07.098 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 96756 00:28:07.098 [2024-07-22 18:35:18.874727] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:07.098 [2024-07-22 18:35:18.874797] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:07.098 18:35:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 96756 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:08.470 00:28:08.470 real 0m3.795s 00:28:08.470 user 0m3.481s 00:28:08.470 sys 0m0.839s 00:28:08.470 ************************************ 00:28:08.470 END TEST nvmf_async_init 00:28:08.470 ************************************ 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.470 ************************************ 00:28:08.470 START TEST dma 00:28:08.470 ************************************ 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:08.470 * Looking for test storage... 
00:28:08.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:08.470 18:35:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.471 18:35:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.471 18:35:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.471 18:35:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:28:08.471 18:35:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.471 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:28:08.471 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:08.471 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:08.471 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:08.471 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:08.471 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:08.471 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:08.471 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:08.471 18:35:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:08.471 18:35:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:08.471 18:35:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:28:08.471 00:28:08.471 real 0m0.101s 00:28:08.471 user 0m0.052s 00:28:08.471 sys 0m0.056s 00:28:08.471 18:35:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:08.471 18:35:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:28:08.471 ************************************ 00:28:08.471 END TEST dma 00:28:08.471 ************************************ 00:28:08.471 18:35:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:28:08.471 18:35:20 nvmf_tcp.nvmf_host -- 
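The dma suite above is a no-op on this configuration: host/dma.sh only exercises memory-domain DMA offload over RDMA, so with --transport=tcp it bails out straight away ('[' tcp '!=' rdma ']' followed by exit 0 in the trace, about 0.1 s of wall time). The guard amounts to something like the following, assuming the transport is carried in a TEST_TRANSPORT-style variable (a paraphrase of the two traced lines, not a copy of the script):

# host/dma.sh, as reflected at @12-@13 of the trace
if [ "$TEST_TRANSPORT" != rdma ]; then
    exit 0    # nothing to test for TCP
fi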
nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:08.471 18:35:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:08.471 18:35:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:08.471 18:35:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.471 ************************************ 00:28:08.471 START TEST nvmf_identify 00:28:08.471 ************************************ 00:28:08.471 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:08.471 * Looking for test storage... 00:28:08.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:08.471 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:08.471 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:08.729 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:08.729 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:08.729 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:08.729 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:08.729 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:08.729 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:08.729 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:08.729 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:08.729 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:08.729 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:08.729 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:28:08.729 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:28:08.729 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:08.729 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:08.729 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:08.729 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:08.729 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:08.729 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:08.729 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:08.729 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:08.729 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.729 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.729 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:08.730 18:35:20 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:08.730 Cannot find device "nvmf_tgt_br" 00:28:08.730 18:35:20 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # true 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:08.730 Cannot find device "nvmf_tgt_br2" 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # true 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:08.730 Cannot find device "nvmf_tgt_br" 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # true 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:08.730 Cannot find device "nvmf_tgt_br2" 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # true 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:08.730 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:08.730 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:08.730 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:08.988 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:08.988 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:08.988 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:08.988 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if up 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:08.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:08.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:28:08.989 00:28:08.989 --- 10.0.0.2 ping statistics --- 00:28:08.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.989 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:08.989 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:08.989 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:28:08.989 00:28:08.989 --- 10.0.0.3 ping statistics --- 00:28:08.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.989 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:08.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:08.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:28:08.989 00:28:08.989 --- 10.0.0.1 ping statistics --- 00:28:08.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.989 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=97035 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 97035 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 97035 ']' 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:08.989 18:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:09.247 [2024-07-22 18:35:21.065650] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
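For reference, the nvmf_veth_init steps replayed above build a small veth/netns/bridge topology before the target is started. A minimal standalone sketch of that topology using plain iproute2 and iptables follows; the interface names and the 10.0.0.0/24 addresses are taken from the log, while running it by hand as root (and the exact ordering of the "up" commands) is an assumption, not the harness itself:

  # sketch: recreate the test network by hand (not the nvmf/common.sh code)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # reachability checks, mirroring the pings in the log
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The three pings at the end correspond to the host-to-target and target-to-host checks shown above; the "Cannot find device" and "Cannot open network namespace" messages earlier in the log are just the best-effort teardown of a previous run.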
00:28:09.247 [2024-07-22 18:35:21.066064] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.505 [2024-07-22 18:35:21.274155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:09.763 [2024-07-22 18:35:21.579394] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:09.764 [2024-07-22 18:35:21.579493] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:09.764 [2024-07-22 18:35:21.579520] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:09.764 [2024-07-22 18:35:21.579544] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:09.764 [2024-07-22 18:35:21.579576] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:09.764 [2024-07-22 18:35:21.579898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.764 [2024-07-22 18:35:21.580703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:09.764 [2024-07-22 18:35:21.580893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:09.764 [2024-07-22 18:35:21.580958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.022 18:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:10.022 18:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:28:10.022 18:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:10.022 18:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.022 18:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:10.022 [2024-07-22 18:35:21.993631] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:10.022 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.022 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:10.022 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:10.022 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:10.287 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:10.287 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.287 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:10.287 Malloc0 00:28:10.287 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.287 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:10.287 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.287 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:10.287 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.287 18:35:22 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:10.287 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.287 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:10.287 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.287 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:10.287 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.287 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:10.287 [2024-07-22 18:35:22.155549] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:10.287 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.287 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:10.287 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.287 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:10.287 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.287 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:10.287 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.287 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:10.287 [ 00:28:10.287 { 00:28:10.287 "allow_any_host": true, 00:28:10.287 "hosts": [], 00:28:10.287 "listen_addresses": [ 00:28:10.287 { 00:28:10.287 "adrfam": "IPv4", 00:28:10.287 "traddr": "10.0.0.2", 00:28:10.287 "trsvcid": "4420", 00:28:10.287 "trtype": "TCP" 00:28:10.287 } 00:28:10.287 ], 00:28:10.287 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:10.287 "subtype": "Discovery" 00:28:10.287 }, 00:28:10.287 { 00:28:10.287 "allow_any_host": true, 00:28:10.287 "hosts": [], 00:28:10.287 "listen_addresses": [ 00:28:10.287 { 00:28:10.287 "adrfam": "IPv4", 00:28:10.287 "traddr": "10.0.0.2", 00:28:10.287 "trsvcid": "4420", 00:28:10.287 "trtype": "TCP" 00:28:10.287 } 00:28:10.287 ], 00:28:10.287 "max_cntlid": 65519, 00:28:10.287 "max_namespaces": 32, 00:28:10.287 "min_cntlid": 1, 00:28:10.287 "model_number": "SPDK bdev Controller", 00:28:10.287 "namespaces": [ 00:28:10.287 { 00:28:10.287 "bdev_name": "Malloc0", 00:28:10.287 "eui64": "ABCDEF0123456789", 00:28:10.287 "name": "Malloc0", 00:28:10.287 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:10.287 "nsid": 1, 00:28:10.287 "uuid": "a9dcf368-6e43-417c-8b55-0f6a00fd82e2" 00:28:10.287 } 00:28:10.287 ], 00:28:10.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:10.287 "serial_number": "SPDK00000000000001", 00:28:10.287 "subtype": "NVMe" 00:28:10.287 } 00:28:10.287 ] 00:28:10.287 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.287 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:10.287 [2024-07-22 18:35:22.246298] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:28:10.288 [2024-07-22 18:35:22.246451] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97088 ] 00:28:10.548 [2024-07-22 18:35:22.423392] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:10.548 [2024-07-22 18:35:22.423578] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:10.548 [2024-07-22 18:35:22.423607] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:10.548 [2024-07-22 18:35:22.423655] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:10.548 [2024-07-22 18:35:22.423703] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:10.548 [2024-07-22 18:35:22.423993] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:10.548 [2024-07-22 18:35:22.424120] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:28:10.548 [2024-07-22 18:35:22.429875] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:10.548 [2024-07-22 18:35:22.429929] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:10.548 [2024-07-22 18:35:22.429951] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:10.548 [2024-07-22 18:35:22.429969] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:10.548 [2024-07-22 18:35:22.430134] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.548 [2024-07-22 18:35:22.430180] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.548 [2024-07-22 18:35:22.430198] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:28:10.548 [2024-07-22 18:35:22.430254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:10.548 [2024-07-22 18:35:22.430330] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:28:10.548 [2024-07-22 18:35:22.440900] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.548 [2024-07-22 18:35:22.440977] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.548 [2024-07-22 18:35:22.441010] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.548 [2024-07-22 18:35:22.441029] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:28:10.548 [2024-07-22 18:35:22.441084] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:10.548 [2024-07-22 18:35:22.441145] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:10.548 [2024-07-22 18:35:22.441186] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:10.548 
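The target-side configuration captured above (nvmf_create_transport through the nvmf_get_subsystems JSON dump) and the identify invocation can be replayed by hand; rpc_cmd in this harness effectively drives the standard scripts/rpc.py JSON-RPC client against the default /var/tmp/spdk.sock. A sketch follows; the rpc.py path, the fixed sleep in place of waitforlisten, and prior hugepage setup (e.g. via scripts/setup.sh) are assumptions, while every method name and argument is taken verbatim from the log:

  # hypothetical manual replay of the target configuration (paths assumed)
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  sleep 2   # the harness waits on the RPC socket (waitforlisten) instead of sleeping
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_get_subsystems
  # identify pass against the discovery subsystem, as in the log
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

The connect trace that continues below is the userspace initiator inside spdk_nvme_identify walking the fabrics connect sequence (ICReq/ICResp, FABRIC CONNECT, property reads of VS/CAP, CC.EN=1, then waiting for CSTS.RDY=1).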
[2024-07-22 18:35:22.441239] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.548 [2024-07-22 18:35:22.441262] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.548 [2024-07-22 18:35:22.441279] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:28:10.548 [2024-07-22 18:35:22.441324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.548 [2024-07-22 18:35:22.441410] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:28:10.548 [2024-07-22 18:35:22.441592] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.548 [2024-07-22 18:35:22.441630] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.548 [2024-07-22 18:35:22.441653] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.548 [2024-07-22 18:35:22.441689] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:28:10.548 [2024-07-22 18:35:22.441709] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:10.548 [2024-07-22 18:35:22.441734] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:10.548 [2024-07-22 18:35:22.441757] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.548 [2024-07-22 18:35:22.441770] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.548 [2024-07-22 18:35:22.441781] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:28:10.548 [2024-07-22 18:35:22.441820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.548 [2024-07-22 18:35:22.441907] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:28:10.548 [2024-07-22 18:35:22.442008] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.548 [2024-07-22 18:35:22.442041] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.548 [2024-07-22 18:35:22.442069] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.548 [2024-07-22 18:35:22.442085] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:28:10.548 [2024-07-22 18:35:22.442107] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:10.548 [2024-07-22 18:35:22.442150] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:10.548 [2024-07-22 18:35:22.442176] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.548 [2024-07-22 18:35:22.442189] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.548 [2024-07-22 18:35:22.442200] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:28:10.548 [2024-07-22 18:35:22.442224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.548 [2024-07-22 18:35:22.442278] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:28:10.548 [2024-07-22 18:35:22.442366] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.548 [2024-07-22 18:35:22.442394] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.548 [2024-07-22 18:35:22.442407] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.548 [2024-07-22 18:35:22.442422] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:28:10.548 [2024-07-22 18:35:22.442443] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:10.548 [2024-07-22 18:35:22.442477] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.548 [2024-07-22 18:35:22.442499] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.548 [2024-07-22 18:35:22.442524] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:28:10.548 [2024-07-22 18:35:22.442557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.548 [2024-07-22 18:35:22.442611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:28:10.549 [2024-07-22 18:35:22.442695] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.549 [2024-07-22 18:35:22.442722] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.549 [2024-07-22 18:35:22.442736] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.549 [2024-07-22 18:35:22.442750] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:28:10.549 [2024-07-22 18:35:22.442779] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:10.549 [2024-07-22 18:35:22.442803] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:10.549 [2024-07-22 18:35:22.442846] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:10.549 [2024-07-22 18:35:22.442980] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:10.549 [2024-07-22 18:35:22.443010] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:10.549 [2024-07-22 18:35:22.443045] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.549 [2024-07-22 18:35:22.443062] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.549 [2024-07-22 18:35:22.443077] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:28:10.549 [2024-07-22 18:35:22.443114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.549 [2024-07-22 18:35:22.443176] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:28:10.549 [2024-07-22 18:35:22.443278] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.549 [2024-07-22 18:35:22.443311] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.549 [2024-07-22 18:35:22.443325] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.549 [2024-07-22 18:35:22.443350] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:28:10.549 [2024-07-22 18:35:22.443367] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:10.549 [2024-07-22 18:35:22.443394] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.549 [2024-07-22 18:35:22.443410] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.549 [2024-07-22 18:35:22.443425] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:28:10.549 [2024-07-22 18:35:22.443451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.549 [2024-07-22 18:35:22.443507] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:28:10.549 [2024-07-22 18:35:22.443594] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.549 [2024-07-22 18:35:22.443615] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.549 [2024-07-22 18:35:22.443629] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.549 [2024-07-22 18:35:22.443642] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:28:10.549 [2024-07-22 18:35:22.443657] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:10.549 [2024-07-22 18:35:22.443676] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:10.549 [2024-07-22 18:35:22.443711] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:10.549 [2024-07-22 18:35:22.443757] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:10.549 [2024-07-22 18:35:22.443806] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.549 [2024-07-22 18:35:22.443826] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:28:10.549 [2024-07-22 18:35:22.443881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.549 [2024-07-22 18:35:22.443973] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:28:10.549 [2024-07-22 18:35:22.444123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:10.549 [2024-07-22 18:35:22.444154] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:10.549 [2024-07-22 18:35:22.444169] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:10.549 [2024-07-22 18:35:22.444184] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:28:10.549 [2024-07-22 18:35:22.444211] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:28:10.549 [2024-07-22 18:35:22.444230] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.549 [2024-07-22 18:35:22.444260] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:10.549 [2024-07-22 18:35:22.444288] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:10.549 [2024-07-22 18:35:22.444326] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.549 [2024-07-22 18:35:22.444348] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.549 [2024-07-22 18:35:22.444361] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.549 [2024-07-22 18:35:22.444375] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:28:10.549 [2024-07-22 18:35:22.444410] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:10.549 [2024-07-22 18:35:22.444432] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:10.549 [2024-07-22 18:35:22.444449] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:10.549 [2024-07-22 18:35:22.444482] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:10.549 [2024-07-22 18:35:22.444502] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:10.549 [2024-07-22 18:35:22.444520] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:10.549 [2024-07-22 18:35:22.444549] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:10.549 [2024-07-22 18:35:22.444576] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.549 [2024-07-22 18:35:22.444605] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.549 [2024-07-22 18:35:22.444621] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:28:10.549 [2024-07-22 18:35:22.444649] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:10.549 [2024-07-22 18:35:22.444710] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:28:10.549 [2024-07-22 18:35:22.444823] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.549 [2024-07-22 18:35:22.448886] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.549 [2024-07-22 18:35:22.448908] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.549 [2024-07-22 18:35:22.448923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:28:10.549 [2024-07-22 18:35:22.448951] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.549 [2024-07-22 18:35:22.448969] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
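The "MDTS max_xfer_size 131072" line above follows the NVMe definition of MDTS as a power-of-two multiple of the minimum memory page size (CAP.MPSMIN, 4096 bytes for this controller), which implies the controller advertises MDTS=5; the value 5 is inferred here, not printed directly. A one-line sanity check:

  # sanity check: max transfer = 2^MDTS * MPSMIN  ->  2^5 * 4096 = 131072
  echo $(( (1 << 5) * 4096 ))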
00:28:10.549 [2024-07-22 18:35:22.448984] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:28:10.549 [2024-07-22 18:35:22.449039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.549 [2024-07-22 18:35:22.449066] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.549 [2024-07-22 18:35:22.449081] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.549 [2024-07-22 18:35:22.449094] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:28:10.549 [2024-07-22 18:35:22.449131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.549 [2024-07-22 18:35:22.449157] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.549 [2024-07-22 18:35:22.449170] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.549 [2024-07-22 18:35:22.449183] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:28:10.549 [2024-07-22 18:35:22.449203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.549 [2024-07-22 18:35:22.449223] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.549 [2024-07-22 18:35:22.449239] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.549 [2024-07-22 18:35:22.449251] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:28:10.549 [2024-07-22 18:35:22.449272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.549 [2024-07-22 18:35:22.449293] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:10.549 [2024-07-22 18:35:22.449334] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:10.549 [2024-07-22 18:35:22.449371] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.549 [2024-07-22 18:35:22.449384] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:28:10.549 [2024-07-22 18:35:22.449413] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.549 [2024-07-22 18:35:22.449477] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:28:10.549 [2024-07-22 18:35:22.449502] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:28:10.549 [2024-07-22 18:35:22.449519] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:28:10.549 [2024-07-22 18:35:22.449534] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:28:10.549 [2024-07-22 18:35:22.449550] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:28:10.549 [2024-07-22 18:35:22.449676] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.549 
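The GET LOG PAGE (02) exchanges that follow fetch the discovery log page whose two records spdk_nvme_identify prints further down (the discovery subsystem itself plus nqn.2016-06.io.spdk:cnode1). As a hedged aside, the same log can be pulled with the kernel initiator instead of SPDK's userspace one; nvme-cli is not used by this harness, so the following is only an assumed-available alternative against the listener created above:

  # kernel-initiator equivalent of the discovery-log fetch (requires nvme-cli; nvme-tcp is already modprobed earlier in the log)
  nvme discover -t tcp -a 10.0.0.2 -s 4420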
[2024-07-22 18:35:22.449711] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.549 [2024-07-22 18:35:22.449728] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.549 [2024-07-22 18:35:22.449742] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:28:10.550 [2024-07-22 18:35:22.449764] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:10.550 [2024-07-22 18:35:22.449784] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:10.550 [2024-07-22 18:35:22.449845] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.449868] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:28:10.550 [2024-07-22 18:35:22.449899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.550 [2024-07-22 18:35:22.449967] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:28:10.550 [2024-07-22 18:35:22.450106] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:10.550 [2024-07-22 18:35:22.450138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:10.550 [2024-07-22 18:35:22.450156] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.450182] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:28:10.550 [2024-07-22 18:35:22.450200] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:28:10.550 [2024-07-22 18:35:22.450217] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.450255] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.450276] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.450306] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.550 [2024-07-22 18:35:22.450327] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.550 [2024-07-22 18:35:22.450339] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.450353] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:28:10.550 [2024-07-22 18:35:22.450419] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:10.550 [2024-07-22 18:35:22.450554] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.450595] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:28:10.550 [2024-07-22 18:35:22.450627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.550 [2024-07-22 18:35:22.450662] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.450681] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.450697] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:28:10.550 [2024-07-22 18:35:22.450729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.550 [2024-07-22 18:35:22.450791] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:28:10.550 [2024-07-22 18:35:22.450816] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:28:10.550 [2024-07-22 18:35:22.451269] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:10.550 [2024-07-22 18:35:22.451316] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:10.550 [2024-07-22 18:35:22.451335] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.451349] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=1024, cccid=4 00:28:10.550 [2024-07-22 18:35:22.451365] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=1024 00:28:10.550 [2024-07-22 18:35:22.451380] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.451411] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.451430] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.451448] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.550 [2024-07-22 18:35:22.451467] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.550 [2024-07-22 18:35:22.451479] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.451496] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:28:10.550 [2024-07-22 18:35:22.491979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.550 [2024-07-22 18:35:22.492039] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.550 [2024-07-22 18:35:22.492055] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.492072] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:28:10.550 [2024-07-22 18:35:22.492138] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.492159] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:28:10.550 [2024-07-22 18:35:22.492219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.550 [2024-07-22 18:35:22.492296] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:28:10.550 [2024-07-22 18:35:22.492508] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:10.550 [2024-07-22 18:35:22.492547] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:10.550 [2024-07-22 18:35:22.492564] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.492579] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=3072, cccid=4 00:28:10.550 [2024-07-22 18:35:22.492592] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=3072 00:28:10.550 [2024-07-22 18:35:22.492604] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.492628] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.492642] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.492663] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.550 [2024-07-22 18:35:22.492683] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.550 [2024-07-22 18:35:22.492696] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.492711] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:28:10.550 [2024-07-22 18:35:22.492750] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.492771] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:28:10.550 [2024-07-22 18:35:22.492800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.550 [2024-07-22 18:35:22.492892] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:28:10.550 [2024-07-22 18:35:22.493027] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:10.550 [2024-07-22 18:35:22.493054] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:10.550 [2024-07-22 18:35:22.493067] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.493080] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8, cccid=4 00:28:10.550 [2024-07-22 18:35:22.493096] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=8 00:28:10.550 [2024-07-22 18:35:22.493142] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.493177] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.493190] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.534895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.550 [2024-07-22 18:35:22.534977] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.550 [2024-07-22 18:35:22.534995] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.550 [2024-07-22 18:35:22.535012] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:28:10.550 ===================================================== 00:28:10.550 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:10.550 ===================================================== 00:28:10.550 Controller Capabilities/Features 00:28:10.550 ================================ 00:28:10.550 Vendor ID: 0000 00:28:10.550 Subsystem Vendor ID: 0000 00:28:10.550 Serial Number: 
.................... 00:28:10.550 Model Number: ........................................ 00:28:10.550 Firmware Version: 24.09 00:28:10.550 Recommended Arb Burst: 0 00:28:10.550 IEEE OUI Identifier: 00 00 00 00:28:10.550 Multi-path I/O 00:28:10.550 May have multiple subsystem ports: No 00:28:10.550 May have multiple controllers: No 00:28:10.550 Associated with SR-IOV VF: No 00:28:10.550 Max Data Transfer Size: 131072 00:28:10.550 Max Number of Namespaces: 0 00:28:10.550 Max Number of I/O Queues: 1024 00:28:10.550 NVMe Specification Version (VS): 1.3 00:28:10.550 NVMe Specification Version (Identify): 1.3 00:28:10.550 Maximum Queue Entries: 128 00:28:10.550 Contiguous Queues Required: Yes 00:28:10.550 Arbitration Mechanisms Supported 00:28:10.550 Weighted Round Robin: Not Supported 00:28:10.550 Vendor Specific: Not Supported 00:28:10.550 Reset Timeout: 15000 ms 00:28:10.550 Doorbell Stride: 4 bytes 00:28:10.550 NVM Subsystem Reset: Not Supported 00:28:10.550 Command Sets Supported 00:28:10.550 NVM Command Set: Supported 00:28:10.550 Boot Partition: Not Supported 00:28:10.550 Memory Page Size Minimum: 4096 bytes 00:28:10.550 Memory Page Size Maximum: 4096 bytes 00:28:10.550 Persistent Memory Region: Not Supported 00:28:10.550 Optional Asynchronous Events Supported 00:28:10.550 Namespace Attribute Notices: Not Supported 00:28:10.550 Firmware Activation Notices: Not Supported 00:28:10.550 ANA Change Notices: Not Supported 00:28:10.550 PLE Aggregate Log Change Notices: Not Supported 00:28:10.550 LBA Status Info Alert Notices: Not Supported 00:28:10.550 EGE Aggregate Log Change Notices: Not Supported 00:28:10.550 Normal NVM Subsystem Shutdown event: Not Supported 00:28:10.550 Zone Descriptor Change Notices: Not Supported 00:28:10.550 Discovery Log Change Notices: Supported 00:28:10.550 Controller Attributes 00:28:10.550 128-bit Host Identifier: Not Supported 00:28:10.551 Non-Operational Permissive Mode: Not Supported 00:28:10.551 NVM Sets: Not Supported 00:28:10.551 Read Recovery Levels: Not Supported 00:28:10.551 Endurance Groups: Not Supported 00:28:10.551 Predictable Latency Mode: Not Supported 00:28:10.551 Traffic Based Keep ALive: Not Supported 00:28:10.551 Namespace Granularity: Not Supported 00:28:10.551 SQ Associations: Not Supported 00:28:10.551 UUID List: Not Supported 00:28:10.551 Multi-Domain Subsystem: Not Supported 00:28:10.551 Fixed Capacity Management: Not Supported 00:28:10.551 Variable Capacity Management: Not Supported 00:28:10.551 Delete Endurance Group: Not Supported 00:28:10.551 Delete NVM Set: Not Supported 00:28:10.551 Extended LBA Formats Supported: Not Supported 00:28:10.551 Flexible Data Placement Supported: Not Supported 00:28:10.551 00:28:10.551 Controller Memory Buffer Support 00:28:10.551 ================================ 00:28:10.551 Supported: No 00:28:10.551 00:28:10.551 Persistent Memory Region Support 00:28:10.551 ================================ 00:28:10.551 Supported: No 00:28:10.551 00:28:10.551 Admin Command Set Attributes 00:28:10.551 ============================ 00:28:10.551 Security Send/Receive: Not Supported 00:28:10.551 Format NVM: Not Supported 00:28:10.551 Firmware Activate/Download: Not Supported 00:28:10.551 Namespace Management: Not Supported 00:28:10.551 Device Self-Test: Not Supported 00:28:10.551 Directives: Not Supported 00:28:10.551 NVMe-MI: Not Supported 00:28:10.551 Virtualization Management: Not Supported 00:28:10.551 Doorbell Buffer Config: Not Supported 00:28:10.551 Get LBA Status Capability: Not Supported 00:28:10.551 Command & Feature 
Lockdown Capability: Not Supported 00:28:10.551 Abort Command Limit: 1 00:28:10.551 Async Event Request Limit: 4 00:28:10.551 Number of Firmware Slots: N/A 00:28:10.551 Firmware Slot 1 Read-Only: N/A 00:28:10.551 Firmware Activation Without Reset: N/A 00:28:10.551 Multiple Update Detection Support: N/A 00:28:10.551 Firmware Update Granularity: No Information Provided 00:28:10.551 Per-Namespace SMART Log: No 00:28:10.551 Asymmetric Namespace Access Log Page: Not Supported 00:28:10.551 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:10.551 Command Effects Log Page: Not Supported 00:28:10.551 Get Log Page Extended Data: Supported 00:28:10.551 Telemetry Log Pages: Not Supported 00:28:10.551 Persistent Event Log Pages: Not Supported 00:28:10.551 Supported Log Pages Log Page: May Support 00:28:10.551 Commands Supported & Effects Log Page: Not Supported 00:28:10.551 Feature Identifiers & Effects Log Page:May Support 00:28:10.551 NVMe-MI Commands & Effects Log Page: May Support 00:28:10.551 Data Area 4 for Telemetry Log: Not Supported 00:28:10.551 Error Log Page Entries Supported: 128 00:28:10.551 Keep Alive: Not Supported 00:28:10.551 00:28:10.551 NVM Command Set Attributes 00:28:10.551 ========================== 00:28:10.551 Submission Queue Entry Size 00:28:10.551 Max: 1 00:28:10.551 Min: 1 00:28:10.551 Completion Queue Entry Size 00:28:10.551 Max: 1 00:28:10.551 Min: 1 00:28:10.551 Number of Namespaces: 0 00:28:10.551 Compare Command: Not Supported 00:28:10.551 Write Uncorrectable Command: Not Supported 00:28:10.551 Dataset Management Command: Not Supported 00:28:10.551 Write Zeroes Command: Not Supported 00:28:10.551 Set Features Save Field: Not Supported 00:28:10.551 Reservations: Not Supported 00:28:10.551 Timestamp: Not Supported 00:28:10.551 Copy: Not Supported 00:28:10.551 Volatile Write Cache: Not Present 00:28:10.551 Atomic Write Unit (Normal): 1 00:28:10.551 Atomic Write Unit (PFail): 1 00:28:10.551 Atomic Compare & Write Unit: 1 00:28:10.551 Fused Compare & Write: Supported 00:28:10.551 Scatter-Gather List 00:28:10.551 SGL Command Set: Supported 00:28:10.551 SGL Keyed: Supported 00:28:10.551 SGL Bit Bucket Descriptor: Not Supported 00:28:10.551 SGL Metadata Pointer: Not Supported 00:28:10.551 Oversized SGL: Not Supported 00:28:10.551 SGL Metadata Address: Not Supported 00:28:10.551 SGL Offset: Supported 00:28:10.551 Transport SGL Data Block: Not Supported 00:28:10.551 Replay Protected Memory Block: Not Supported 00:28:10.551 00:28:10.551 Firmware Slot Information 00:28:10.551 ========================= 00:28:10.551 Active slot: 0 00:28:10.551 00:28:10.551 00:28:10.551 Error Log 00:28:10.551 ========= 00:28:10.551 00:28:10.551 Active Namespaces 00:28:10.551 ================= 00:28:10.551 Discovery Log Page 00:28:10.551 ================== 00:28:10.551 Generation Counter: 2 00:28:10.551 Number of Records: 2 00:28:10.551 Record Format: 0 00:28:10.551 00:28:10.551 Discovery Log Entry 0 00:28:10.551 ---------------------- 00:28:10.551 Transport Type: 3 (TCP) 00:28:10.551 Address Family: 1 (IPv4) 00:28:10.551 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:10.551 Entry Flags: 00:28:10.551 Duplicate Returned Information: 1 00:28:10.551 Explicit Persistent Connection Support for Discovery: 1 00:28:10.551 Transport Requirements: 00:28:10.551 Secure Channel: Not Required 00:28:10.551 Port ID: 0 (0x0000) 00:28:10.551 Controller ID: 65535 (0xffff) 00:28:10.551 Admin Max SQ Size: 128 00:28:10.551 Transport Service Identifier: 4420 00:28:10.551 NVM Subsystem Qualified Name: 
nqn.2014-08.org.nvmexpress.discovery 00:28:10.551 Transport Address: 10.0.0.2 00:28:10.551 Discovery Log Entry 1 00:28:10.551 ---------------------- 00:28:10.551 Transport Type: 3 (TCP) 00:28:10.551 Address Family: 1 (IPv4) 00:28:10.551 Subsystem Type: 2 (NVM Subsystem) 00:28:10.551 Entry Flags: 00:28:10.551 Duplicate Returned Information: 0 00:28:10.551 Explicit Persistent Connection Support for Discovery: 0 00:28:10.551 Transport Requirements: 00:28:10.551 Secure Channel: Not Required 00:28:10.551 Port ID: 0 (0x0000) 00:28:10.551 Controller ID: 65535 (0xffff) 00:28:10.551 Admin Max SQ Size: 128 00:28:10.551 Transport Service Identifier: 4420 00:28:10.551 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:10.551 Transport Address: 10.0.0.2 [2024-07-22 18:35:22.535383] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:10.551 [2024-07-22 18:35:22.535435] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:28:10.551 [2024-07-22 18:35:22.535466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.551 [2024-07-22 18:35:22.535483] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:28:10.551 [2024-07-22 18:35:22.535496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.551 [2024-07-22 18:35:22.535518] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:28:10.551 [2024-07-22 18:35:22.535536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.551 [2024-07-22 18:35:22.535553] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:28:10.551 [2024-07-22 18:35:22.535571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.551 [2024-07-22 18:35:22.535620] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.551 [2024-07-22 18:35:22.535654] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.551 [2024-07-22 18:35:22.535671] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:28:10.551 [2024-07-22 18:35:22.535711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.551 [2024-07-22 18:35:22.535786] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:28:10.551 [2024-07-22 18:35:22.535945] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.551 [2024-07-22 18:35:22.535977] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.551 [2024-07-22 18:35:22.535994] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.551 [2024-07-22 18:35:22.536011] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:28:10.551 [2024-07-22 18:35:22.536041] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.551 [2024-07-22 18:35:22.536060] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.551 [2024-07-22 
18:35:22.536087] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:28:10.551 [2024-07-22 18:35:22.536116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.552 [2024-07-22 18:35:22.536185] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:28:10.552 [2024-07-22 18:35:22.536313] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.552 [2024-07-22 18:35:22.536356] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.552 [2024-07-22 18:35:22.536372] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.536387] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:28:10.552 [2024-07-22 18:35:22.536406] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:10.552 [2024-07-22 18:35:22.536425] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:10.552 [2024-07-22 18:35:22.536462] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.536482] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.536496] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:28:10.552 [2024-07-22 18:35:22.536543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.552 [2024-07-22 18:35:22.536600] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:28:10.552 [2024-07-22 18:35:22.536701] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.552 [2024-07-22 18:35:22.536736] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.552 [2024-07-22 18:35:22.536751] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.536765] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:28:10.552 [2024-07-22 18:35:22.536805] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.536820] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.536829] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:28:10.552 [2024-07-22 18:35:22.536875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.552 [2024-07-22 18:35:22.536930] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:28:10.552 [2024-07-22 18:35:22.537021] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.552 [2024-07-22 18:35:22.537047] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.552 [2024-07-22 18:35:22.537062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.537076] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:28:10.552 [2024-07-22 18:35:22.537112] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.537128] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.537141] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:28:10.552 [2024-07-22 18:35:22.537164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.552 [2024-07-22 18:35:22.537225] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:28:10.552 [2024-07-22 18:35:22.537312] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.552 [2024-07-22 18:35:22.537339] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.552 [2024-07-22 18:35:22.537354] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.537368] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:28:10.552 [2024-07-22 18:35:22.537404] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.537425] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.537439] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:28:10.552 [2024-07-22 18:35:22.537464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.552 [2024-07-22 18:35:22.537516] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:28:10.552 [2024-07-22 18:35:22.537591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.552 [2024-07-22 18:35:22.537628] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.552 [2024-07-22 18:35:22.537644] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.537658] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:28:10.552 [2024-07-22 18:35:22.537697] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.537715] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.537728] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:28:10.552 [2024-07-22 18:35:22.537752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.552 [2024-07-22 18:35:22.537801] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:28:10.552 [2024-07-22 18:35:22.537896] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.552 [2024-07-22 18:35:22.537926] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.552 [2024-07-22 18:35:22.537940] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.537955] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:28:10.552 [2024-07-22 18:35:22.537992] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.538012] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:28:10.552 [2024-07-22 18:35:22.538027] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:28:10.552 [2024-07-22 18:35:22.538066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.552 [2024-07-22 18:35:22.538119] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:28:10.552 [2024-07-22 18:35:22.538202] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.552 [2024-07-22 18:35:22.538219] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.552 [2024-07-22 18:35:22.538226] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.538234] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:28:10.552 [2024-07-22 18:35:22.538259] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.538270] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.538277] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:28:10.552 [2024-07-22 18:35:22.538293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.552 [2024-07-22 18:35:22.538327] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:28:10.552 [2024-07-22 18:35:22.538412] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.552 [2024-07-22 18:35:22.538424] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.552 [2024-07-22 18:35:22.538431] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.538439] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:28:10.552 [2024-07-22 18:35:22.538458] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.538468] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.538475] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:28:10.552 [2024-07-22 18:35:22.538490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.552 [2024-07-22 18:35:22.538520] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:28:10.552 [2024-07-22 18:35:22.538599] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.552 [2024-07-22 18:35:22.538611] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.552 [2024-07-22 18:35:22.538618] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.538625] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:28:10.552 [2024-07-22 18:35:22.538644] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.538653] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.538660] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 
00:28:10.552 [2024-07-22 18:35:22.538682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.552 [2024-07-22 18:35:22.538712] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:28:10.552 [2024-07-22 18:35:22.538800] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.552 [2024-07-22 18:35:22.538812] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.552 [2024-07-22 18:35:22.538818] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.538826] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:28:10.552 [2024-07-22 18:35:22.542882] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.542900] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.542908] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:28:10.552 [2024-07-22 18:35:22.542924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.552 [2024-07-22 18:35:22.542963] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:28:10.552 [2024-07-22 18:35:22.543064] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.552 [2024-07-22 18:35:22.543076] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.552 [2024-07-22 18:35:22.543083] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.552 [2024-07-22 18:35:22.543091] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:28:10.552 [2024-07-22 18:35:22.543106] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:28:10.810 00:28:10.810 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:10.810 [2024-07-22 18:35:22.661641] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:28:10.810 [2024-07-22 18:35:22.661781] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97097 ] 00:28:11.072 [2024-07-22 18:35:22.841765] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:11.072 [2024-07-22 18:35:22.841967] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:11.072 [2024-07-22 18:35:22.841989] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:11.072 [2024-07-22 18:35:22.842024] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:11.072 [2024-07-22 18:35:22.842056] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:11.072 [2024-07-22 18:35:22.842310] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:11.072 [2024-07-22 18:35:22.842403] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:28:11.072 [2024-07-22 18:35:22.848872] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:11.072 [2024-07-22 18:35:22.848914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:11.072 [2024-07-22 18:35:22.848930] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:11.072 [2024-07-22 18:35:22.848942] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:11.072 [2024-07-22 18:35:22.849064] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.072 [2024-07-22 18:35:22.849081] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.072 [2024-07-22 18:35:22.849095] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:28:11.072 [2024-07-22 18:35:22.849132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:11.072 [2024-07-22 18:35:22.849188] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:28:11.072 [2024-07-22 18:35:22.856871] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.072 [2024-07-22 18:35:22.856907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.072 [2024-07-22 18:35:22.856918] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.072 [2024-07-22 18:35:22.856934] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:28:11.072 [2024-07-22 18:35:22.856955] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:11.072 [2024-07-22 18:35:22.856974] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:11.072 [2024-07-22 18:35:22.856987] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:11.072 [2024-07-22 18:35:22.857019] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.072 [2024-07-22 18:35:22.857029] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.072 [2024-07-22 
18:35:22.857037] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:28:11.072 [2024-07-22 18:35:22.857056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.072 [2024-07-22 18:35:22.857103] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:28:11.072 [2024-07-22 18:35:22.857488] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.072 [2024-07-22 18:35:22.857517] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.072 [2024-07-22 18:35:22.857527] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.072 [2024-07-22 18:35:22.857535] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:28:11.072 [2024-07-22 18:35:22.857548] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:11.072 [2024-07-22 18:35:22.857563] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:11.072 [2024-07-22 18:35:22.857579] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.072 [2024-07-22 18:35:22.857587] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.072 [2024-07-22 18:35:22.857595] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:28:11.072 [2024-07-22 18:35:22.857618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.072 [2024-07-22 18:35:22.857654] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:28:11.072 [2024-07-22 18:35:22.858109] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.072 [2024-07-22 18:35:22.858138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.072 [2024-07-22 18:35:22.858147] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.072 [2024-07-22 18:35:22.858154] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:28:11.072 [2024-07-22 18:35:22.858167] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:11.072 [2024-07-22 18:35:22.858184] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:11.072 [2024-07-22 18:35:22.858198] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.072 [2024-07-22 18:35:22.858207] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.072 [2024-07-22 18:35:22.858215] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:28:11.072 [2024-07-22 18:35:22.858230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.072 [2024-07-22 18:35:22.858265] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:28:11.072 [2024-07-22 18:35:22.858546] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.072 [2024-07-22 18:35:22.858576] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.072 [2024-07-22 18:35:22.858585] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.072 [2024-07-22 18:35:22.858592] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:28:11.072 [2024-07-22 18:35:22.858604] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:11.072 [2024-07-22 18:35:22.858624] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.072 [2024-07-22 18:35:22.858638] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.072 [2024-07-22 18:35:22.858646] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:28:11.072 [2024-07-22 18:35:22.858662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.072 [2024-07-22 18:35:22.858693] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:28:11.072 [2024-07-22 18:35:22.859129] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.072 [2024-07-22 18:35:22.859157] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.072 [2024-07-22 18:35:22.859172] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.072 [2024-07-22 18:35:22.859180] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:28:11.072 [2024-07-22 18:35:22.859190] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:11.072 [2024-07-22 18:35:22.859201] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:11.072 [2024-07-22 18:35:22.859216] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:11.072 [2024-07-22 18:35:22.859327] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:11.072 [2024-07-22 18:35:22.859336] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:11.072 [2024-07-22 18:35:22.859354] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.072 [2024-07-22 18:35:22.859363] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.072 [2024-07-22 18:35:22.859376] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:28:11.072 [2024-07-22 18:35:22.859391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.072 [2024-07-22 18:35:22.859425] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:28:11.072 [2024-07-22 18:35:22.859723] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.072 [2024-07-22 18:35:22.859746] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.072 [2024-07-22 18:35:22.859754] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.072 [2024-07-22 
18:35:22.859762] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:28:11.072 [2024-07-22 18:35:22.859781] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:11.072 [2024-07-22 18:35:22.859801] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.072 [2024-07-22 18:35:22.859811] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.073 [2024-07-22 18:35:22.859819] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:28:11.073 [2024-07-22 18:35:22.859847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.073 [2024-07-22 18:35:22.859883] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:28:11.073 [2024-07-22 18:35:22.860249] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.073 [2024-07-22 18:35:22.860272] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.073 [2024-07-22 18:35:22.860279] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.073 [2024-07-22 18:35:22.860287] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:28:11.073 [2024-07-22 18:35:22.860297] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:11.073 [2024-07-22 18:35:22.860315] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:11.073 [2024-07-22 18:35:22.860331] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:11.073 [2024-07-22 18:35:22.860352] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:11.073 [2024-07-22 18:35:22.860373] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.073 [2024-07-22 18:35:22.860382] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:28:11.073 [2024-07-22 18:35:22.860398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.073 [2024-07-22 18:35:22.860449] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:28:11.073 [2024-07-22 18:35:22.864863] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:11.073 [2024-07-22 18:35:22.864894] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:11.073 [2024-07-22 18:35:22.864904] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:11.073 [2024-07-22 18:35:22.864913] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:28:11.073 [2024-07-22 18:35:22.864923] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:28:11.073 [2024-07-22 18:35:22.864932] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.073 [2024-07-22 
18:35:22.864958] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:11.073 [2024-07-22 18:35:22.864968] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:11.073 [2024-07-22 18:35:22.864983] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.073 [2024-07-22 18:35:22.864993] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.073 [2024-07-22 18:35:22.864999] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.073 [2024-07-22 18:35:22.865007] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:28:11.073 [2024-07-22 18:35:22.865028] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:11.073 [2024-07-22 18:35:22.865039] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:11.073 [2024-07-22 18:35:22.865048] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:11.073 [2024-07-22 18:35:22.865063] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:11.073 [2024-07-22 18:35:22.865073] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:11.073 [2024-07-22 18:35:22.865082] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:11.073 [2024-07-22 18:35:22.865101] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:11.073 [2024-07-22 18:35:22.865122] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.073 [2024-07-22 18:35:22.865135] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.073 [2024-07-22 18:35:22.865143] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:28:11.073 [2024-07-22 18:35:22.865160] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:11.073 [2024-07-22 18:35:22.865198] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:28:11.073 [2024-07-22 18:35:22.865547] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.073 [2024-07-22 18:35:22.865574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.073 [2024-07-22 18:35:22.865584] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.073 [2024-07-22 18:35:22.865592] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:28:11.073 [2024-07-22 18:35:22.865609] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.073 [2024-07-22 18:35:22.865618] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.073 [2024-07-22 18:35:22.865626] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:28:11.073 [2024-07-22 18:35:22.865659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.073 [2024-07-22 18:35:22.865673] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.073 
[2024-07-22 18:35:22.865681] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.073 [2024-07-22 18:35:22.865687] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:28:11.073 [2024-07-22 18:35:22.865698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.073 [2024-07-22 18:35:22.865712] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.073 [2024-07-22 18:35:22.865720] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.073 [2024-07-22 18:35:22.865726] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:28:11.073 [2024-07-22 18:35:22.865737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.073 [2024-07-22 18:35:22.865747] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.073 [2024-07-22 18:35:22.865754] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.073 [2024-07-22 18:35:22.865761] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:28:11.073 [2024-07-22 18:35:22.865771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.073 [2024-07-22 18:35:22.865780] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:11.073 [2024-07-22 18:35:22.865801] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:11.073 [2024-07-22 18:35:22.865819] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.073 [2024-07-22 18:35:22.865827] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:28:11.073 [2024-07-22 18:35:22.865862] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.073 [2024-07-22 18:35:22.865903] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:28:11.073 [2024-07-22 18:35:22.865916] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:28:11.073 [2024-07-22 18:35:22.865924] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:28:11.073 [2024-07-22 18:35:22.865932] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:28:11.073 [2024-07-22 18:35:22.865939] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:28:11.073 [2024-07-22 18:35:22.866438] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.073 [2024-07-22 18:35:22.866476] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.073 [2024-07-22 18:35:22.866487] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.073 [2024-07-22 18:35:22.866495] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:28:11.073 [2024-07-22 18:35:22.866507] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:11.073 [2024-07-22 18:35:22.866518] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:11.073 [2024-07-22 18:35:22.866534] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:11.073 [2024-07-22 18:35:22.866561] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:11.073 [2024-07-22 18:35:22.866581] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.073 [2024-07-22 18:35:22.866591] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.073 [2024-07-22 18:35:22.866599] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:28:11.073 [2024-07-22 18:35:22.866618] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:11.073 [2024-07-22 18:35:22.866667] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:28:11.073 [2024-07-22 18:35:22.867014] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.073 [2024-07-22 18:35:22.867041] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.073 [2024-07-22 18:35:22.867050] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.073 [2024-07-22 18:35:22.867057] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:28:11.073 [2024-07-22 18:35:22.867159] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:11.073 [2024-07-22 18:35:22.867184] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:11.073 [2024-07-22 18:35:22.867208] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.073 [2024-07-22 18:35:22.867218] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:28:11.073 [2024-07-22 18:35:22.867234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.073 [2024-07-22 18:35:22.867269] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:28:11.073 [2024-07-22 18:35:22.867590] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:11.073 [2024-07-22 18:35:22.867613] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:11.073 [2024-07-22 18:35:22.867622] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:11.073 [2024-07-22 18:35:22.867629] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:28:11.074 [2024-07-22 18:35:22.867637] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:28:11.074 [2024-07-22 18:35:22.867649] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.867663] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.867670] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.867687] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.074 [2024-07-22 18:35:22.867698] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.074 [2024-07-22 18:35:22.867704] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.867711] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:28:11.074 [2024-07-22 18:35:22.867762] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:11.074 [2024-07-22 18:35:22.867787] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:11.074 [2024-07-22 18:35:22.867816] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:11.074 [2024-07-22 18:35:22.867856] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.867868] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:28:11.074 [2024-07-22 18:35:22.867888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.074 [2024-07-22 18:35:22.867924] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:28:11.074 [2024-07-22 18:35:22.868422] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:11.074 [2024-07-22 18:35:22.868444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:11.074 [2024-07-22 18:35:22.868452] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.868459] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:28:11.074 [2024-07-22 18:35:22.868467] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:28:11.074 [2024-07-22 18:35:22.868474] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.868486] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.868494] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.868507] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.074 [2024-07-22 18:35:22.868517] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.074 [2024-07-22 18:35:22.868527] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.868535] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:28:11.074 [2024-07-22 18:35:22.868578] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:11.074 [2024-07-22 18:35:22.868605] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 
30000 ms) 00:28:11.074 [2024-07-22 18:35:22.868624] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.868633] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:28:11.074 [2024-07-22 18:35:22.868648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.074 [2024-07-22 18:35:22.868681] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:28:11.074 [2024-07-22 18:35:22.872865] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:11.074 [2024-07-22 18:35:22.872894] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:11.074 [2024-07-22 18:35:22.872909] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.872917] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:28:11.074 [2024-07-22 18:35:22.872925] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:28:11.074 [2024-07-22 18:35:22.872933] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.872946] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.872954] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.872963] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.074 [2024-07-22 18:35:22.872973] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.074 [2024-07-22 18:35:22.872979] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.872986] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:28:11.074 [2024-07-22 18:35:22.873024] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:11.074 [2024-07-22 18:35:22.873059] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:11.074 [2024-07-22 18:35:22.873083] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:11.074 [2024-07-22 18:35:22.873098] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:28:11.074 [2024-07-22 18:35:22.873107] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:11.074 [2024-07-22 18:35:22.873116] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:11.074 [2024-07-22 18:35:22.873126] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:11.074 [2024-07-22 18:35:22.873135] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:11.074 [2024-07-22 18:35:22.873144] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:11.074 [2024-07-22 18:35:22.873200] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.873212] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:28:11.074 [2024-07-22 18:35:22.873228] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.074 [2024-07-22 18:35:22.873242] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.873250] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.873262] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:28:11.074 [2024-07-22 18:35:22.873278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.074 [2024-07-22 18:35:22.873320] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:28:11.074 [2024-07-22 18:35:22.873334] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:28:11.074 [2024-07-22 18:35:22.873692] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.074 [2024-07-22 18:35:22.873716] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.074 [2024-07-22 18:35:22.873726] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.873734] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:28:11.074 [2024-07-22 18:35:22.873748] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.074 [2024-07-22 18:35:22.873758] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.074 [2024-07-22 18:35:22.873764] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.873771] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:28:11.074 [2024-07-22 18:35:22.873789] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.873798] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:28:11.074 [2024-07-22 18:35:22.873816] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.074 [2024-07-22 18:35:22.873870] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:28:11.074 [2024-07-22 18:35:22.874211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.074 [2024-07-22 18:35:22.874237] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.074 [2024-07-22 18:35:22.874245] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.874257] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:28:11.074 [2024-07-22 18:35:22.874277] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.874286] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=5 on tqpair(0x61500000f080) 00:28:11.074 [2024-07-22 18:35:22.874299] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.074 [2024-07-22 18:35:22.874330] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:28:11.074 [2024-07-22 18:35:22.874759] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.074 [2024-07-22 18:35:22.874781] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.074 [2024-07-22 18:35:22.874789] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.874796] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:28:11.074 [2024-07-22 18:35:22.874814] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.874822] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:28:11.074 [2024-07-22 18:35:22.874853] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.074 [2024-07-22 18:35:22.874888] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:28:11.074 [2024-07-22 18:35:22.875192] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.074 [2024-07-22 18:35:22.875213] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.074 [2024-07-22 18:35:22.875221] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.875229] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:28:11.074 [2024-07-22 18:35:22.875270] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.074 [2024-07-22 18:35:22.875281] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:28:11.074 [2024-07-22 18:35:22.875296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.075 [2024-07-22 18:35:22.875310] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.075 [2024-07-22 18:35:22.875324] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:28:11.075 [2024-07-22 18:35:22.875340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.075 [2024-07-22 18:35:22.875354] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.075 [2024-07-22 18:35:22.875362] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500000f080) 00:28:11.075 [2024-07-22 18:35:22.875378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.075 [2024-07-22 18:35:22.875396] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.075 [2024-07-22 18:35:22.875404] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 
00:28:11.075 [2024-07-22 18:35:22.875416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.075 [2024-07-22 18:35:22.875449] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:28:11.075 [2024-07-22 18:35:22.875462] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:28:11.075 [2024-07-22 18:35:22.875470] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:28:11.075 [2024-07-22 18:35:22.875477] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:28:11.075 [2024-07-22 18:35:22.876049] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:11.075 [2024-07-22 18:35:22.876075] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:11.075 [2024-07-22 18:35:22.876088] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:11.075 [2024-07-22 18:35:22.876096] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8192, cccid=5 00:28:11.075 [2024-07-22 18:35:22.876105] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500000f080): expected_datao=0, payload_size=8192 00:28:11.075 [2024-07-22 18:35:22.876113] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.075 [2024-07-22 18:35:22.876142] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:11.075 [2024-07-22 18:35:22.876152] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:11.075 [2024-07-22 18:35:22.876169] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:11.075 [2024-07-22 18:35:22.876185] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:11.075 [2024-07-22 18:35:22.876192] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:11.075 [2024-07-22 18:35:22.876199] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=4 00:28:11.075 [2024-07-22 18:35:22.876207] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:28:11.075 [2024-07-22 18:35:22.876214] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.075 [2024-07-22 18:35:22.876225] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:11.075 [2024-07-22 18:35:22.876232] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:11.075 [2024-07-22 18:35:22.876241] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:11.075 [2024-07-22 18:35:22.876253] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:11.075 [2024-07-22 18:35:22.876260] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:11.075 [2024-07-22 18:35:22.876269] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=6 00:28:11.075 [2024-07-22 18:35:22.876277] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:28:11.075 [2024-07-22 18:35:22.876284] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.075 [2024-07-22 18:35:22.876298] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:11.075 [2024-07-22 18:35:22.876304] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:11.075 [2024-07-22 18:35:22.876313] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:11.075 [2024-07-22 18:35:22.876322] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:11.075 [2024-07-22 18:35:22.876328] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:11.075 [2024-07-22 18:35:22.876335] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=7 00:28:11.075 [2024-07-22 18:35:22.876342] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:28:11.075 [2024-07-22 18:35:22.876349] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.075 [2024-07-22 18:35:22.876364] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:11.075 [2024-07-22 18:35:22.876371] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:11.075 [2024-07-22 18:35:22.876380] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.075 [2024-07-22 18:35:22.876389] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.075 [2024-07-22 18:35:22.876395] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.075 [2024-07-22 18:35:22.876403] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:28:11.075 [2024-07-22 18:35:22.876432] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.075 [2024-07-22 18:35:22.876443] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.075 [2024-07-22 18:35:22.876449] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.075 [2024-07-22 18:35:22.876456] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:28:11.075 [2024-07-22 18:35:22.876479] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.075 [2024-07-22 18:35:22.876489] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.075 [2024-07-22 18:35:22.876495] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.075 [2024-07-22 18:35:22.876501] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500000f080 00:28:11.075 [2024-07-22 18:35:22.876514] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.075 [2024-07-22 18:35:22.876524] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.075 [2024-07-22 18:35:22.876530] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.075 [2024-07-22 18:35:22.876536] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:28:11.075 ===================================================== 00:28:11.075 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:11.075 ===================================================== 00:28:11.075 Controller Capabilities/Features 00:28:11.075 ================================ 00:28:11.075 Vendor ID: 8086 00:28:11.075 Subsystem Vendor ID: 8086 00:28:11.075 Serial Number: SPDK00000000000001 00:28:11.075 Model Number: SPDK bdev Controller 00:28:11.075 Firmware Version: 24.09 00:28:11.075 
Recommended Arb Burst: 6 00:28:11.075 IEEE OUI Identifier: e4 d2 5c 00:28:11.075 Multi-path I/O 00:28:11.075 May have multiple subsystem ports: Yes 00:28:11.075 May have multiple controllers: Yes 00:28:11.075 Associated with SR-IOV VF: No 00:28:11.075 Max Data Transfer Size: 131072 00:28:11.075 Max Number of Namespaces: 32 00:28:11.075 Max Number of I/O Queues: 127 00:28:11.075 NVMe Specification Version (VS): 1.3 00:28:11.075 NVMe Specification Version (Identify): 1.3 00:28:11.075 Maximum Queue Entries: 128 00:28:11.075 Contiguous Queues Required: Yes 00:28:11.075 Arbitration Mechanisms Supported 00:28:11.075 Weighted Round Robin: Not Supported 00:28:11.075 Vendor Specific: Not Supported 00:28:11.075 Reset Timeout: 15000 ms 00:28:11.075 Doorbell Stride: 4 bytes 00:28:11.075 NVM Subsystem Reset: Not Supported 00:28:11.075 Command Sets Supported 00:28:11.075 NVM Command Set: Supported 00:28:11.075 Boot Partition: Not Supported 00:28:11.075 Memory Page Size Minimum: 4096 bytes 00:28:11.075 Memory Page Size Maximum: 4096 bytes 00:28:11.075 Persistent Memory Region: Not Supported 00:28:11.075 Optional Asynchronous Events Supported 00:28:11.075 Namespace Attribute Notices: Supported 00:28:11.075 Firmware Activation Notices: Not Supported 00:28:11.075 ANA Change Notices: Not Supported 00:28:11.075 PLE Aggregate Log Change Notices: Not Supported 00:28:11.075 LBA Status Info Alert Notices: Not Supported 00:28:11.075 EGE Aggregate Log Change Notices: Not Supported 00:28:11.075 Normal NVM Subsystem Shutdown event: Not Supported 00:28:11.075 Zone Descriptor Change Notices: Not Supported 00:28:11.075 Discovery Log Change Notices: Not Supported 00:28:11.075 Controller Attributes 00:28:11.075 128-bit Host Identifier: Supported 00:28:11.075 Non-Operational Permissive Mode: Not Supported 00:28:11.075 NVM Sets: Not Supported 00:28:11.075 Read Recovery Levels: Not Supported 00:28:11.075 Endurance Groups: Not Supported 00:28:11.075 Predictable Latency Mode: Not Supported 00:28:11.075 Traffic Based Keep ALive: Not Supported 00:28:11.075 Namespace Granularity: Not Supported 00:28:11.075 SQ Associations: Not Supported 00:28:11.075 UUID List: Not Supported 00:28:11.075 Multi-Domain Subsystem: Not Supported 00:28:11.075 Fixed Capacity Management: Not Supported 00:28:11.075 Variable Capacity Management: Not Supported 00:28:11.075 Delete Endurance Group: Not Supported 00:28:11.075 Delete NVM Set: Not Supported 00:28:11.075 Extended LBA Formats Supported: Not Supported 00:28:11.075 Flexible Data Placement Supported: Not Supported 00:28:11.075 00:28:11.075 Controller Memory Buffer Support 00:28:11.075 ================================ 00:28:11.075 Supported: No 00:28:11.075 00:28:11.075 Persistent Memory Region Support 00:28:11.075 ================================ 00:28:11.075 Supported: No 00:28:11.075 00:28:11.075 Admin Command Set Attributes 00:28:11.075 ============================ 00:28:11.075 Security Send/Receive: Not Supported 00:28:11.075 Format NVM: Not Supported 00:28:11.075 Firmware Activate/Download: Not Supported 00:28:11.076 Namespace Management: Not Supported 00:28:11.076 Device Self-Test: Not Supported 00:28:11.076 Directives: Not Supported 00:28:11.076 NVMe-MI: Not Supported 00:28:11.076 Virtualization Management: Not Supported 00:28:11.076 Doorbell Buffer Config: Not Supported 00:28:11.076 Get LBA Status Capability: Not Supported 00:28:11.076 Command & Feature Lockdown Capability: Not Supported 00:28:11.076 Abort Command Limit: 4 00:28:11.076 Async Event Request Limit: 4 00:28:11.076 Number of 
Firmware Slots: N/A 00:28:11.076 Firmware Slot 1 Read-Only: N/A 00:28:11.076 Firmware Activation Without Reset: N/A 00:28:11.076 Multiple Update Detection Support: N/A 00:28:11.076 Firmware Update Granularity: No Information Provided 00:28:11.076 Per-Namespace SMART Log: No 00:28:11.076 Asymmetric Namespace Access Log Page: Not Supported 00:28:11.076 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:11.076 Command Effects Log Page: Supported 00:28:11.076 Get Log Page Extended Data: Supported 00:28:11.076 Telemetry Log Pages: Not Supported 00:28:11.076 Persistent Event Log Pages: Not Supported 00:28:11.076 Supported Log Pages Log Page: May Support 00:28:11.076 Commands Supported & Effects Log Page: Not Supported 00:28:11.076 Feature Identifiers & Effects Log Page:May Support 00:28:11.076 NVMe-MI Commands & Effects Log Page: May Support 00:28:11.076 Data Area 4 for Telemetry Log: Not Supported 00:28:11.076 Error Log Page Entries Supported: 128 00:28:11.076 Keep Alive: Supported 00:28:11.076 Keep Alive Granularity: 10000 ms 00:28:11.076 00:28:11.076 NVM Command Set Attributes 00:28:11.076 ========================== 00:28:11.076 Submission Queue Entry Size 00:28:11.076 Max: 64 00:28:11.076 Min: 64 00:28:11.076 Completion Queue Entry Size 00:28:11.076 Max: 16 00:28:11.076 Min: 16 00:28:11.076 Number of Namespaces: 32 00:28:11.076 Compare Command: Supported 00:28:11.076 Write Uncorrectable Command: Not Supported 00:28:11.076 Dataset Management Command: Supported 00:28:11.076 Write Zeroes Command: Supported 00:28:11.076 Set Features Save Field: Not Supported 00:28:11.076 Reservations: Supported 00:28:11.076 Timestamp: Not Supported 00:28:11.076 Copy: Supported 00:28:11.076 Volatile Write Cache: Present 00:28:11.076 Atomic Write Unit (Normal): 1 00:28:11.076 Atomic Write Unit (PFail): 1 00:28:11.076 Atomic Compare & Write Unit: 1 00:28:11.076 Fused Compare & Write: Supported 00:28:11.076 Scatter-Gather List 00:28:11.076 SGL Command Set: Supported 00:28:11.076 SGL Keyed: Supported 00:28:11.076 SGL Bit Bucket Descriptor: Not Supported 00:28:11.076 SGL Metadata Pointer: Not Supported 00:28:11.076 Oversized SGL: Not Supported 00:28:11.076 SGL Metadata Address: Not Supported 00:28:11.076 SGL Offset: Supported 00:28:11.076 Transport SGL Data Block: Not Supported 00:28:11.076 Replay Protected Memory Block: Not Supported 00:28:11.076 00:28:11.076 Firmware Slot Information 00:28:11.076 ========================= 00:28:11.076 Active slot: 1 00:28:11.076 Slot 1 Firmware Revision: 24.09 00:28:11.076 00:28:11.076 00:28:11.076 Commands Supported and Effects 00:28:11.076 ============================== 00:28:11.076 Admin Commands 00:28:11.076 -------------- 00:28:11.076 Get Log Page (02h): Supported 00:28:11.076 Identify (06h): Supported 00:28:11.076 Abort (08h): Supported 00:28:11.076 Set Features (09h): Supported 00:28:11.076 Get Features (0Ah): Supported 00:28:11.076 Asynchronous Event Request (0Ch): Supported 00:28:11.076 Keep Alive (18h): Supported 00:28:11.076 I/O Commands 00:28:11.076 ------------ 00:28:11.076 Flush (00h): Supported LBA-Change 00:28:11.076 Write (01h): Supported LBA-Change 00:28:11.076 Read (02h): Supported 00:28:11.076 Compare (05h): Supported 00:28:11.076 Write Zeroes (08h): Supported LBA-Change 00:28:11.076 Dataset Management (09h): Supported LBA-Change 00:28:11.076 Copy (19h): Supported LBA-Change 00:28:11.076 00:28:11.076 Error Log 00:28:11.076 ========= 00:28:11.076 00:28:11.076 Arbitration 00:28:11.076 =========== 00:28:11.076 Arbitration Burst: 1 00:28:11.076 00:28:11.076 Power 
Management 00:28:11.076 ================ 00:28:11.076 Number of Power States: 1 00:28:11.076 Current Power State: Power State #0 00:28:11.076 Power State #0: 00:28:11.076 Max Power: 0.00 W 00:28:11.076 Non-Operational State: Operational 00:28:11.076 Entry Latency: Not Reported 00:28:11.076 Exit Latency: Not Reported 00:28:11.076 Relative Read Throughput: 0 00:28:11.076 Relative Read Latency: 0 00:28:11.076 Relative Write Throughput: 0 00:28:11.076 Relative Write Latency: 0 00:28:11.076 Idle Power: Not Reported 00:28:11.076 Active Power: Not Reported 00:28:11.076 Non-Operational Permissive Mode: Not Supported 00:28:11.076 00:28:11.076 Health Information 00:28:11.076 ================== 00:28:11.076 Critical Warnings: 00:28:11.076 Available Spare Space: OK 00:28:11.076 Temperature: OK 00:28:11.076 Device Reliability: OK 00:28:11.076 Read Only: No 00:28:11.076 Volatile Memory Backup: OK 00:28:11.076 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:11.076 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:28:11.076 Available Spare: 0% 00:28:11.076 Available Spare Threshold: 0% 00:28:11.076 Life Percentage Used:[2024-07-22 18:35:22.876745] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.076 [2024-07-22 18:35:22.876760] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:28:11.076 [2024-07-22 18:35:22.876776] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.076 [2024-07-22 18:35:22.876823] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:28:11.076 [2024-07-22 18:35:22.880879] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.076 [2024-07-22 18:35:22.880897] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.076 [2024-07-22 18:35:22.880905] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.076 [2024-07-22 18:35:22.880914] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:28:11.076 [2024-07-22 18:35:22.881023] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:11.076 [2024-07-22 18:35:22.881052] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:28:11.076 [2024-07-22 18:35:22.881069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.076 [2024-07-22 18:35:22.881080] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:28:11.076 [2024-07-22 18:35:22.881089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.076 [2024-07-22 18:35:22.881098] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:28:11.076 [2024-07-22 18:35:22.881120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.076 [2024-07-22 18:35:22.881133] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:28:11.076 [2024-07-22 18:35:22.881142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.077 [2024-07-22 18:35:22.881159] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.881169] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.881176] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:28:11.077 [2024-07-22 18:35:22.881197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.077 [2024-07-22 18:35:22.881238] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:28:11.077 [2024-07-22 18:35:22.881567] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.077 [2024-07-22 18:35:22.881589] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.077 [2024-07-22 18:35:22.881602] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.881611] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:28:11.077 [2024-07-22 18:35:22.881627] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.881636] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.881643] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:28:11.077 [2024-07-22 18:35:22.881662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.077 [2024-07-22 18:35:22.881701] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:28:11.077 [2024-07-22 18:35:22.882134] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.077 [2024-07-22 18:35:22.882159] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.077 [2024-07-22 18:35:22.882167] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.882175] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:28:11.077 [2024-07-22 18:35:22.882186] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:11.077 [2024-07-22 18:35:22.882196] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:11.077 [2024-07-22 18:35:22.882220] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.882230] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.882238] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:28:11.077 [2024-07-22 18:35:22.882253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.077 [2024-07-22 18:35:22.882287] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:28:11.077 [2024-07-22 18:35:22.882553] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.077 [2024-07-22 18:35:22.882577] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.077 [2024-07-22 18:35:22.882584] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.882592] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:28:11.077 [2024-07-22 18:35:22.882612] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.882621] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.882628] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:28:11.077 [2024-07-22 18:35:22.882649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.077 [2024-07-22 18:35:22.882681] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:28:11.077 [2024-07-22 18:35:22.882951] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.077 [2024-07-22 18:35:22.882978] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.077 [2024-07-22 18:35:22.882986] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.882994] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:28:11.077 [2024-07-22 18:35:22.883013] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.883022] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.883029] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:28:11.077 [2024-07-22 18:35:22.883043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.077 [2024-07-22 18:35:22.883074] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:28:11.077 [2024-07-22 18:35:22.883314] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.077 [2024-07-22 18:35:22.883335] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.077 [2024-07-22 18:35:22.883343] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.883351] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:28:11.077 [2024-07-22 18:35:22.883371] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.883385] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.883392] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:28:11.077 [2024-07-22 18:35:22.883409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.077 [2024-07-22 18:35:22.883440] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:28:11.077 [2024-07-22 18:35:22.883688] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.077 [2024-07-22 18:35:22.883709] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.077 [2024-07-22 18:35:22.883717] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.883725] nvme_tcp.c:1069:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:28:11.077 [2024-07-22 18:35:22.883744] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.883752] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.883759] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:28:11.077 [2024-07-22 18:35:22.883772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.077 [2024-07-22 18:35:22.883802] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:28:11.077 [2024-07-22 18:35:22.884060] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.077 [2024-07-22 18:35:22.884082] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.077 [2024-07-22 18:35:22.884090] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.884097] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:28:11.077 [2024-07-22 18:35:22.884121] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.884130] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.884136] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:28:11.077 [2024-07-22 18:35:22.884150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.077 [2024-07-22 18:35:22.884181] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:28:11.077 [2024-07-22 18:35:22.884399] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.077 [2024-07-22 18:35:22.884417] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.077 [2024-07-22 18:35:22.884424] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.884431] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:28:11.077 [2024-07-22 18:35:22.884450] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.884459] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.884465] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:28:11.077 [2024-07-22 18:35:22.884479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.077 [2024-07-22 18:35:22.884508] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:28:11.077 [2024-07-22 18:35:22.884735] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.077 [2024-07-22 18:35:22.884753] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.077 [2024-07-22 18:35:22.884760] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.884767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:28:11.077 [2024-07-22 18:35:22.884790] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.884800] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.884806] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:28:11.077 [2024-07-22 18:35:22.884819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.077 [2024-07-22 18:35:22.888872] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:28:11.077 [2024-07-22 18:35:22.888970] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:11.077 [2024-07-22 18:35:22.888994] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:11.077 [2024-07-22 18:35:22.889002] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:11.077 [2024-07-22 18:35:22.889009] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:28:11.077 [2024-07-22 18:35:22.889026] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:28:11.077 0% 00:28:11.077 Data Units Read: 0 00:28:11.077 Data Units Written: 0 00:28:11.077 Host Read Commands: 0 00:28:11.077 Host Write Commands: 0 00:28:11.077 Controller Busy Time: 0 minutes 00:28:11.077 Power Cycles: 0 00:28:11.077 Power On Hours: 0 hours 00:28:11.077 Unsafe Shutdowns: 0 00:28:11.077 Unrecoverable Media Errors: 0 00:28:11.077 Lifetime Error Log Entries: 0 00:28:11.077 Warning Temperature Time: 0 minutes 00:28:11.077 Critical Temperature Time: 0 minutes 00:28:11.077 00:28:11.077 Number of Queues 00:28:11.077 ================ 00:28:11.077 Number of I/O Submission Queues: 127 00:28:11.077 Number of I/O Completion Queues: 127 00:28:11.077 00:28:11.077 Active Namespaces 00:28:11.077 ================= 00:28:11.077 Namespace ID:1 00:28:11.077 Error Recovery Timeout: Unlimited 00:28:11.077 Command Set Identifier: NVM (00h) 00:28:11.078 Deallocate: Supported 00:28:11.078 Deallocated/Unwritten Error: Not Supported 00:28:11.078 Deallocated Read Value: Unknown 00:28:11.078 Deallocate in Write Zeroes: Not Supported 00:28:11.078 Deallocated Guard Field: 0xFFFF 00:28:11.078 Flush: Supported 00:28:11.078 Reservation: Supported 00:28:11.078 Namespace Sharing Capabilities: Multiple Controllers 00:28:11.078 Size (in LBAs): 131072 (0GiB) 00:28:11.078 Capacity (in LBAs): 131072 (0GiB) 00:28:11.078 Utilization (in LBAs): 131072 (0GiB) 00:28:11.078 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:11.078 EUI64: ABCDEF0123456789 00:28:11.078 UUID: a9dcf368-6e43-417c-8b55-0f6a00fd82e2 00:28:11.078 Thin Provisioning: Not Supported 00:28:11.078 Per-NS Atomic Units: Yes 00:28:11.078 Atomic Boundary Size (Normal): 0 00:28:11.078 Atomic Boundary Size (PFail): 0 00:28:11.078 Atomic Boundary Offset: 0 00:28:11.078 Maximum Single Source Range Length: 65535 00:28:11.078 Maximum Copy Length: 65535 00:28:11.078 Maximum Source Range Count: 1 00:28:11.078 NGUID/EUI64 Never Reused: No 00:28:11.078 Namespace Write Protected: No 00:28:11.078 Number of LBA Formats: 1 00:28:11.078 Current LBA Format: LBA Format #00 00:28:11.078 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:11.078 00:28:11.078 18:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:11.078 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:28:11.078 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.078 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:11.078 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.078 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:11.078 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:11.078 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:11.078 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:11.078 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:11.078 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:11.078 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:11.078 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:11.078 rmmod nvme_tcp 00:28:11.078 rmmod nvme_fabrics 00:28:11.078 rmmod nvme_keyring 00:28:11.078 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:11.078 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:11.078 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:11.078 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 97035 ']' 00:28:11.078 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 97035 00:28:11.078 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 97035 ']' 00:28:11.078 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 97035 00:28:11.078 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:28:11.078 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:11.078 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97035 00:28:11.336 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:11.336 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:11.336 killing process with pid 97035 00:28:11.336 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97035' 00:28:11.336 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@967 -- # kill 97035 00:28:11.336 18:35:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # wait 97035 00:28:12.757 18:35:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:12.757 18:35:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:12.757 18:35:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:12.757 18:35:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:12.757 18:35:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:12.757 18:35:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.757 18:35:24 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.757 18:35:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.757 18:35:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:12.757 00:28:12.757 real 0m4.270s 00:28:12.757 user 0m11.269s 00:28:12.757 sys 0m1.091s 00:28:12.757 18:35:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:12.757 ************************************ 00:28:12.757 END TEST nvmf_identify 00:28:12.757 ************************************ 00:28:12.757 18:35:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:12.757 18:35:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:28:12.757 18:35:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:12.757 18:35:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:12.757 18:35:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:12.757 18:35:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.757 ************************************ 00:28:12.757 START TEST nvmf_perf 00:28:12.757 ************************************ 00:28:12.757 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:13.015 * Looking for test storage... 00:28:13.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:13.015 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:13.015 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:13.015 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.015 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.015 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.015 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.015 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.015 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.015 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.015 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.015 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.015 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.015 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:28:13.015 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:28:13.015 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.015 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.015 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:13.015 
18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:13.015 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:13.015 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.015 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.015 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:13.016 
18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:13.016 18:35:24 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:13.016 Cannot find device "nvmf_tgt_br" 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # true 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:13.016 Cannot find device "nvmf_tgt_br2" 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # true 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:13.016 Cannot find device "nvmf_tgt_br" 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # true 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:13.016 Cannot find device "nvmf_tgt_br2" 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # true 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:13.016 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:13.016 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:13.016 18:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:13.016 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:13.016 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:13.275 18:35:25 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:13.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:13.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:28:13.275 00:28:13.275 --- 10.0.0.2 ping statistics --- 00:28:13.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.275 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:13.275 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:13.275 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:28:13.275 00:28:13.275 --- 10.0.0.3 ping statistics --- 00:28:13.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.275 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:13.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:13.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:28:13.275 00:28:13.275 --- 10.0.0.1 ping statistics --- 00:28:13.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.275 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=97276 00:28:13.275 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:13.276 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 97276 00:28:13.276 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 97276 ']' 00:28:13.276 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.276 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:13.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.276 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.276 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:13.276 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:13.534 [2024-07-22 18:35:25.309731] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:28:13.534 [2024-07-22 18:35:25.309950] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:13.534 [2024-07-22 18:35:25.483790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:13.792 [2024-07-22 18:35:25.766733] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:13.792 [2024-07-22 18:35:25.766867] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:13.792 [2024-07-22 18:35:25.766907] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:13.792 [2024-07-22 18:35:25.766940] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:13.792 [2024-07-22 18:35:25.766956] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:13.792 [2024-07-22 18:35:25.767257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.792 [2024-07-22 18:35:25.768316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:13.792 [2024-07-22 18:35:25.768553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:13.792 [2024-07-22 18:35:25.768562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.358 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:14.358 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:28:14.358 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:14.358 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:14.358 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:14.358 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:14.358 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:14.358 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:28:14.939 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:28:14.939 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:15.197 18:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:28:15.197 18:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:15.455 18:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:15.455 18:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:28:15.455 18:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:15.455 18:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:15.455 18:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:15.713 [2024-07-22 18:35:27.680126] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:15.713 18:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:15.972 18:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:15.973 18:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:16.231 18:35:28 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:16.231 18:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:16.488 18:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:16.746 [2024-07-22 18:35:28.702532] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:16.746 18:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:17.003 18:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:17.003 18:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:28:17.003 18:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:17.003 18:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:28:18.389 Initializing NVMe Controllers 00:28:18.389 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:28:18.389 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:28:18.389 Initialization complete. Launching workers. 00:28:18.389 ======================================================== 00:28:18.389 Latency(us) 00:28:18.390 Device Information : IOPS MiB/s Average min max 00:28:18.390 PCIE (0000:00:10.0) NSID 1 from core 0: 22333.19 87.24 1432.91 336.66 6866.47 00:28:18.390 ======================================================== 00:28:18.390 Total : 22333.19 87.24 1432.91 336.66 6866.47 00:28:18.390 00:28:18.390 18:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:19.762 Initializing NVMe Controllers 00:28:19.762 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:19.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:19.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:19.762 Initialization complete. Launching workers. 
00:28:19.762 ======================================================== 00:28:19.762 Latency(us) 00:28:19.762 Device Information : IOPS MiB/s Average min max 00:28:19.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2420.29 9.45 412.63 171.12 5261.10 00:28:19.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.50 0.48 8159.78 6951.76 12051.91 00:28:19.762 ======================================================== 00:28:19.762 Total : 2543.80 9.94 788.77 171.12 12051.91 00:28:19.762 00:28:19.762 18:35:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:21.136 Initializing NVMe Controllers 00:28:21.136 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:21.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:21.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:21.136 Initialization complete. Launching workers. 00:28:21.136 ======================================================== 00:28:21.136 Latency(us) 00:28:21.136 Device Information : IOPS MiB/s Average min max 00:28:21.136 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5660.40 22.11 5653.15 1132.08 12587.50 00:28:21.136 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2670.59 10.43 12119.04 6796.22 35549.54 00:28:21.136 ======================================================== 00:28:21.136 Total : 8330.99 32.54 7725.86 1132.08 35549.54 00:28:21.136 00:28:21.136 18:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:28:21.136 18:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:24.418 Initializing NVMe Controllers 00:28:24.418 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:24.418 Controller IO queue size 128, less than required. 00:28:24.418 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:24.418 Controller IO queue size 128, less than required. 00:28:24.418 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:24.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:24.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:24.418 Initialization complete. Launching workers. 
00:28:24.418 ======================================================== 00:28:24.418 Latency(us) 00:28:24.418 Device Information : IOPS MiB/s Average min max 00:28:24.418 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 936.80 234.20 145020.81 97033.91 348015.95 00:28:24.418 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 506.04 126.51 266460.73 144406.66 500928.47 00:28:24.418 ======================================================== 00:28:24.418 Total : 1442.85 360.71 187612.91 97033.91 500928.47 00:28:24.418 00:28:24.418 18:35:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:24.418 Initializing NVMe Controllers 00:28:24.418 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:24.418 Controller IO queue size 128, less than required. 00:28:24.418 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:24.418 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:24.418 Controller IO queue size 128, less than required. 00:28:24.418 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:24.418 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:28:24.418 WARNING: Some requested NVMe devices were skipped 00:28:24.418 No valid NVMe controllers or AIO or URING devices found 00:28:24.418 18:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:27.702 Initializing NVMe Controllers 00:28:27.702 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:27.702 Controller IO queue size 128, less than required. 00:28:27.702 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:27.702 Controller IO queue size 128, less than required. 00:28:27.702 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:27.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:27.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:27.702 Initialization complete. Launching workers. 
00:28:27.702 00:28:27.702 ==================== 00:28:27.702 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:27.702 TCP transport: 00:28:27.702 polls: 4418 00:28:27.702 idle_polls: 2042 00:28:27.702 sock_completions: 2376 00:28:27.702 nvme_completions: 2859 00:28:27.702 submitted_requests: 4316 00:28:27.702 queued_requests: 1 00:28:27.702 00:28:27.702 ==================== 00:28:27.702 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:27.702 TCP transport: 00:28:27.702 polls: 6725 00:28:27.702 idle_polls: 4522 00:28:27.702 sock_completions: 2203 00:28:27.702 nvme_completions: 4503 00:28:27.702 submitted_requests: 6816 00:28:27.702 queued_requests: 1 00:28:27.702 ======================================================== 00:28:27.702 Latency(us) 00:28:27.702 Device Information : IOPS MiB/s Average min max 00:28:27.702 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 713.66 178.41 205879.12 116223.73 553376.92 00:28:27.702 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1124.17 281.04 114903.49 51338.45 351811.81 00:28:27.702 ======================================================== 00:28:27.702 Total : 1837.82 459.46 150230.71 51338.45 553376.92 00:28:27.702 00:28:27.702 18:35:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:27.702 18:35:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:27.960 18:35:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:27.960 18:35:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:28:27.960 18:35:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:28.219 18:35:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=53b817b5-fa7b-4eec-bd6a-2a2555b64e21 00:28:28.219 18:35:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 53b817b5-fa7b-4eec-bd6a-2a2555b64e21 00:28:28.219 18:35:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=53b817b5-fa7b-4eec-bd6a-2a2555b64e21 00:28:28.219 18:35:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:28.219 18:35:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:28.219 18:35:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:28.219 18:35:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:28.477 18:35:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:28.477 { 00:28:28.477 "base_bdev": "Nvme0n1", 00:28:28.477 "block_size": 4096, 00:28:28.477 "cluster_size": 4194304, 00:28:28.477 "free_clusters": 1278, 00:28:28.477 "name": "lvs_0", 00:28:28.477 "total_data_clusters": 1278, 00:28:28.477 "uuid": "53b817b5-fa7b-4eec-bd6a-2a2555b64e21" 00:28:28.477 } 00:28:28.477 ]' 00:28:28.477 18:35:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="53b817b5-fa7b-4eec-bd6a-2a2555b64e21") .free_clusters' 00:28:28.477 18:35:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278 00:28:28.477 18:35:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | 
select(.uuid=="53b817b5-fa7b-4eec-bd6a-2a2555b64e21") .cluster_size' 00:28:28.735 18:35:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:28.735 18:35:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 00:28:28.735 5112 00:28:28.736 18:35:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 00:28:28.736 18:35:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:28:28.736 18:35:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 53b817b5-fa7b-4eec-bd6a-2a2555b64e21 lbd_0 5112 00:28:29.007 18:35:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=7573e1bb-9719-41f8-ba3f-901bf59ccf9d 00:28:29.008 18:35:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 7573e1bb-9719-41f8-ba3f-901bf59ccf9d lvs_n_0 00:28:29.275 18:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=08de1491-dd08-4652-b14e-6c0791902bb4 00:28:29.275 18:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 08de1491-dd08-4652-b14e-6c0791902bb4 00:28:29.275 18:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=08de1491-dd08-4652-b14e-6c0791902bb4 00:28:29.275 18:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:29.275 18:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:29.275 18:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:29.275 18:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:29.533 18:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:29.533 { 00:28:29.533 "base_bdev": "Nvme0n1", 00:28:29.533 "block_size": 4096, 00:28:29.533 "cluster_size": 4194304, 00:28:29.533 "free_clusters": 0, 00:28:29.533 "name": "lvs_0", 00:28:29.533 "total_data_clusters": 1278, 00:28:29.533 "uuid": "53b817b5-fa7b-4eec-bd6a-2a2555b64e21" 00:28:29.533 }, 00:28:29.533 { 00:28:29.533 "base_bdev": "7573e1bb-9719-41f8-ba3f-901bf59ccf9d", 00:28:29.533 "block_size": 4096, 00:28:29.533 "cluster_size": 4194304, 00:28:29.533 "free_clusters": 1276, 00:28:29.533 "name": "lvs_n_0", 00:28:29.533 "total_data_clusters": 1276, 00:28:29.533 "uuid": "08de1491-dd08-4652-b14e-6c0791902bb4" 00:28:29.533 } 00:28:29.533 ]' 00:28:29.533 18:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="08de1491-dd08-4652-b14e-6c0791902bb4") .free_clusters' 00:28:29.533 18:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 00:28:29.533 18:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="08de1491-dd08-4652-b14e-6c0791902bb4") .cluster_size' 00:28:29.533 5104 00:28:29.533 18:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:29.533 18:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 00:28:29.533 18:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 00:28:29.533 18:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:28:29.533 18:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 08de1491-dd08-4652-b14e-6c0791902bb4 lbd_nest_0 5104 00:28:29.791 18:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=e8bd3a47-4b35-4ef6-a07e-bd0b2a748188 00:28:29.791 18:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:30.049 18:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:30.049 18:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 e8bd3a47-4b35-4ef6-a07e-bd0b2a748188 00:28:30.307 18:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:30.565 18:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:30.565 18:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:30.565 18:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:30.565 18:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:30.565 18:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:31.132 Initializing NVMe Controllers 00:28:31.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:31.132 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:28:31.132 WARNING: Some requested NVMe devices were skipped 00:28:31.132 No valid NVMe controllers or AIO or URING devices found 00:28:31.132 18:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:31.132 18:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:43.352 Initializing NVMe Controllers 00:28:43.352 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:43.352 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:43.352 Initialization complete. Launching workers. 
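Stripped of the xtrace noise, re-exporting the nested lvol for the queue-depth sweep above comes down to three RPCs against the running target (a condensed sketch with paths shortened; the tcp transport itself was created earlier in the test with nvmf_create_transport):

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 e8bd3a47-4b35-4ef6-a07e-bd0b2a748188   # the lbd_nest_0 lvol
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420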
00:28:43.352 ======================================================== 00:28:43.352 Latency(us) 00:28:43.352 Device Information : IOPS MiB/s Average min max 00:28:43.352 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 748.00 93.50 1335.77 481.74 7873.66 00:28:43.352 ======================================================== 00:28:43.352 Total : 748.00 93.50 1335.77 481.74 7873.66 00:28:43.352 00:28:43.352 18:35:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:43.352 18:35:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:43.352 18:35:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:43.352 Initializing NVMe Controllers 00:28:43.352 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:43.352 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:28:43.352 WARNING: Some requested NVMe devices were skipped 00:28:43.352 No valid NVMe controllers or AIO or URING devices found 00:28:43.352 18:35:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:43.352 18:35:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:53.315 Initializing NVMe Controllers 00:28:53.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:53.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:53.315 Initialization complete. Launching workers. 
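The 512-byte legs of this sweep keep getting skipped for the same reason each time: the namespace behind cnode1 is the 5104 MiB lbd_nest_0 lvol, whose 4096-byte blocks cannot be addressed with 512-byte I/O. The "ns size" in the warning checks out (plain arithmetic, not from the run):

    echo $(( 5104 * 1024 * 1024 ))    # 5351931904 bytes, the ns size printed in the warning
    echo $(( 5351931904 / 4096 ))     # 1306624 blocks of 4096 B each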
00:28:53.315 ======================================================== 00:28:53.315 Latency(us) 00:28:53.315 Device Information : IOPS MiB/s Average min max 00:28:53.315 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1137.48 142.19 28133.95 8079.16 239692.03 00:28:53.315 ======================================================== 00:28:53.315 Total : 1137.48 142.19 28133.95 8079.16 239692.03 00:28:53.315 00:28:53.315 18:36:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:53.315 18:36:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:53.315 18:36:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:53.315 Initializing NVMe Controllers 00:28:53.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:53.315 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:28:53.315 WARNING: Some requested NVMe devices were skipped 00:28:53.315 No valid NVMe controllers or AIO or URING devices found 00:28:53.315 18:36:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:53.315 18:36:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:03.318 Initializing NVMe Controllers 00:29:03.318 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:03.318 Controller IO queue size 128, less than required. 00:29:03.318 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:03.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:03.318 Initialization complete. Launching workers. 
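For reading these latency tables: the MiB/s column is simply IOPS times the 128 KiB I/O size, and the average latency follows from queue depth over IOPS (Little's law). A quick cross-check against the 32-deep run above, not part of the run itself:

    awk 'BEGIN { print 1137.48 * 131072 / 1048576 }'   # ~142.19 MiB/s, matching the reported column
    awk 'BEGIN { print 32 / 1137.48 * 1e6 }'           # ~28132 us, close to the reported 28133.95 us average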
00:29:03.318 ======================================================== 00:29:03.318 Latency(us) 00:29:03.318 Device Information : IOPS MiB/s Average min max 00:29:03.318 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2921.22 365.15 43798.66 8717.91 123845.31 00:29:03.318 ======================================================== 00:29:03.318 Total : 2921.22 365.15 43798.66 8717.91 123845.31 00:29:03.318 00:29:03.318 18:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:03.576 18:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e8bd3a47-4b35-4ef6-a07e-bd0b2a748188 00:29:03.835 18:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:04.093 18:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7573e1bb-9719-41f8-ba3f-901bf59ccf9d 00:29:04.351 18:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:04.609 18:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:04.609 18:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:04.609 18:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:04.609 18:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:04.609 18:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:04.609 18:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:04.609 18:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:04.609 18:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:04.609 rmmod nvme_tcp 00:29:04.871 rmmod nvme_fabrics 00:29:04.871 rmmod nvme_keyring 00:29:04.871 18:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:04.871 18:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:04.871 18:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:04.871 18:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 97276 ']' 00:29:04.871 18:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 97276 00:29:04.871 18:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 97276 ']' 00:29:04.871 18:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 97276 00:29:04.871 18:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:29:04.871 18:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:04.871 18:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97276 00:29:04.871 killing process with pid 97276 00:29:04.871 18:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:04.871 18:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:04.871 18:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97276' 00:29:04.871 18:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@967 -- # kill 97276 00:29:04.871 18:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # wait 97276 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:06.793 00:29:06.793 real 0m53.910s 00:29:06.793 user 3m22.602s 00:29:06.793 sys 0m11.689s 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:06.793 ************************************ 00:29:06.793 END TEST nvmf_perf 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:06.793 ************************************ 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.793 ************************************ 00:29:06.793 START TEST nvmf_fio_host 00:29:06.793 ************************************ 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:06.793 * Looking for test storage... 
00:29:06.793 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:06.793 18:36:18 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:06.793 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 
-- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:06.794 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:07.053 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:07.053 Cannot find device "nvmf_tgt_br" 00:29:07.053 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:29:07.053 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:07.053 Cannot find device "nvmf_tgt_br2" 00:29:07.053 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:29:07.053 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:07.053 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:07.053 
Cannot find device "nvmf_tgt_br" 00:29:07.053 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:29:07.053 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:07.053 Cannot find device "nvmf_tgt_br2" 00:29:07.053 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:29:07.053 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:07.053 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:07.053 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:07.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:07.053 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:29:07.053 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:07.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:07.053 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:29:07.053 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:07.053 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:07.053 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:07.053 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:07.053 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:07.053 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:07.053 18:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:07.053 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:07.053 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:07.053 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:07.053 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:07.053 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:07.053 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:07.053 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:07.053 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:07.053 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:07.053 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:07.053 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:07.322 18:36:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:07.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:07.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:29:07.322 00:29:07.322 --- 10.0.0.2 ping statistics --- 00:29:07.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.322 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:07.322 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:07.322 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:29:07.322 00:29:07.322 --- 10.0.0.3 ping statistics --- 00:29:07.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.322 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:07.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:07.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:29:07.322 00:29:07.322 --- 10.0.0.1 ping statistics --- 00:29:07.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.322 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=98266 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0xF 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 98266 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 98266 ']' 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:07.322 18:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.322 [2024-07-22 18:36:19.288271] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:29:07.322 [2024-07-22 18:36:19.288448] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.580 [2024-07-22 18:36:19.467480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:07.838 [2024-07-22 18:36:19.806192] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.838 [2024-07-22 18:36:19.806271] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.838 [2024-07-22 18:36:19.806291] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.838 [2024-07-22 18:36:19.806306] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:07.838 [2024-07-22 18:36:19.806319] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
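All target-side traffic in this test stays inside the nvmf_tgt_ns_spdk namespace wired up above. Condensed from the trace (second target interface, the `ip link set ... up` calls, and full binary paths omitted), the plumbing plus target launch amounts to roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side, moved into the netns
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # harness backgrounds this and waits on its RPC socket (waitforlisten)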
00:29:07.838 [2024-07-22 18:36:19.806948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.839 [2024-07-22 18:36:19.807248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:07.839 [2024-07-22 18:36:19.807459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:07.839 [2024-07-22 18:36:19.807649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.405 18:36:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:08.405 18:36:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:29:08.405 18:36:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:08.663 [2024-07-22 18:36:20.484057] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:08.663 18:36:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:08.663 18:36:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:08.663 18:36:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.663 18:36:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:08.922 Malloc1 00:29:08.922 18:36:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:09.180 18:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:09.438 18:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:09.696 [2024-07-22 18:36:21.672172] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:09.696 18:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:09.955 18:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:29:09.955 18:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:09.955 18:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:09.955 18:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:09.955 18:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:09.955 18:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:09.955 18:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:29:09.955 18:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # shift 00:29:09.955 18:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:09.955 18:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:09.955 18:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:29:09.955 18:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:09.955 18:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:10.213 18:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:10.213 18:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:10.213 18:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:29:10.213 18:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:29:10.213 18:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:10.213 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:10.213 fio-3.35 00:29:10.213 Starting 1 thread 00:29:12.743 00:29:12.743 test: (groupid=0, jobs=1): err= 0: pid=98385: Mon Jul 22 18:36:24 2024 00:29:12.743 read: IOPS=6000, BW=23.4MiB/s (24.6MB/s)(47.1MiB/2011msec) 00:29:12.743 slat (usec): min=2, max=418, avg= 3.48, stdev= 4.48 00:29:12.743 clat (usec): min=3872, max=20948, avg=11166.48, stdev=1660.79 00:29:12.743 lat (usec): min=3910, max=20951, avg=11169.96, stdev=1660.58 00:29:12.743 clat percentiles (usec): 00:29:12.743 | 1.00th=[ 8291], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:29:12.743 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:29:12.743 | 70.00th=[11338], 80.00th=[11731], 90.00th=[12649], 95.00th=[15139], 00:29:12.743 | 99.00th=[17433], 99.50th=[17695], 99.90th=[19530], 99.95th=[19792], 00:29:12.743 | 99.99th=[20841] 00:29:12.743 bw ( KiB/s): min=22528, max=25472, per=100.00%, avg=24006.00, stdev=1202.44, samples=4 00:29:12.743 iops : min= 5632, max= 6368, avg=6001.50, stdev=300.61, samples=4 00:29:12.743 write: IOPS=5988, BW=23.4MiB/s (24.5MB/s)(47.0MiB/2011msec); 0 zone resets 00:29:12.743 slat (usec): min=2, max=152, avg= 3.60, stdev= 2.42 00:29:12.743 clat (usec): min=2131, max=18498, avg=10045.41, stdev=1495.51 00:29:12.743 lat (usec): min=2149, max=18501, avg=10049.01, stdev=1495.41 00:29:12.743 clat percentiles (usec): 00:29:12.743 | 1.00th=[ 7439], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9110], 00:29:12.743 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[ 9896], 00:29:12.743 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11338], 95.00th=[13566], 00:29:12.743 | 99.00th=[15664], 99.50th=[16057], 99.90th=[17171], 99.95th=[17957], 00:29:12.743 | 99.99th=[18482] 00:29:12.743 bw ( KiB/s): min=22336, max=24984, per=100.00%, avg=23958.00, stdev=1249.05, samples=4 00:29:12.743 iops : min= 5584, max= 6246, avg=5989.50, stdev=312.26, samples=4 00:29:12.743 lat (msec) : 4=0.10%, 10=39.13%, 20=60.75%, 50=0.01% 00:29:12.743 cpu : usr=70.50%, sys=21.54%, 
ctx=11, majf=0, minf=1539 00:29:12.743 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:12.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:12.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:12.744 issued rwts: total=12067,12042,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:12.744 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:12.744 00:29:12.744 Run status group 0 (all jobs): 00:29:12.744 READ: bw=23.4MiB/s (24.6MB/s), 23.4MiB/s-23.4MiB/s (24.6MB/s-24.6MB/s), io=47.1MiB (49.4MB), run=2011-2011msec 00:29:12.744 WRITE: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=47.0MiB (49.3MB), run=2011-2011msec 00:29:12.744 ----------------------------------------------------- 00:29:12.744 Suppressions used: 00:29:12.744 count bytes template 00:29:12.744 1 57 /usr/src/fio/parse.c 00:29:12.744 1 8 libtcmalloc_minimal.so 00:29:12.744 ----------------------------------------------------- 00:29:12.744 00:29:13.000 18:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:13.000 18:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:13.000 18:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:13.000 18:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:13.000 18:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:13.000 18:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:29:13.000 18:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:13.000 18:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:13.000 18:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:13.000 18:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:13.000 18:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:29:13.000 18:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:13.000 18:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:13.000 18:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:13.000 18:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:29:13.000 18:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:29:13.000 18:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 
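Both fio jobs in this test (example_config.fio above, and mock_sgl_config.fio whose output follows) bypass the kernel initiator and drive the target through the SPDK fio plugin: the plugin is LD_PRELOADed, alongside libasan for this ASAN build, and the NVMe-oF transport ID is passed through --filename. Reduced to its essentials, the invocation pattern is:

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096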
00:29:13.000 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:13.000 fio-3.35 00:29:13.000 Starting 1 thread 00:29:15.525 00:29:15.525 test: (groupid=0, jobs=1): err= 0: pid=98427: Mon Jul 22 18:36:27 2024 00:29:15.525 read: IOPS=5476, BW=85.6MiB/s (89.7MB/s)(172MiB/2010msec) 00:29:15.525 slat (usec): min=3, max=244, avg= 5.20, stdev= 3.35 00:29:15.525 clat (usec): min=3724, max=31196, avg=13528.69, stdev=3377.15 00:29:15.525 lat (usec): min=3728, max=31201, avg=13533.89, stdev=3377.21 00:29:15.525 clat percentiles (usec): 00:29:15.525 | 1.00th=[ 6980], 5.00th=[ 8356], 10.00th=[ 9372], 20.00th=[10683], 00:29:15.525 | 30.00th=[11469], 40.00th=[12256], 50.00th=[13173], 60.00th=[14222], 00:29:15.525 | 70.00th=[15008], 80.00th=[16450], 90.00th=[18220], 95.00th=[19530], 00:29:15.525 | 99.00th=[21627], 99.50th=[22414], 99.90th=[23200], 99.95th=[25297], 00:29:15.525 | 99.99th=[27132] 00:29:15.525 bw ( KiB/s): min=39968, max=55281, per=52.67%, avg=46148.25, stdev=7256.29, samples=4 00:29:15.525 iops : min= 2498, max= 3455, avg=2884.25, stdev=453.49, samples=4 00:29:15.525 write: IOPS=3192, BW=49.9MiB/s (52.3MB/s)(94.0MiB/1884msec); 0 zone resets 00:29:15.525 slat (usec): min=34, max=203, avg=41.77, stdev= 8.53 00:29:15.525 clat (usec): min=7632, max=31266, avg=17250.99, stdev=3568.08 00:29:15.525 lat (usec): min=7672, max=31306, avg=17292.76, stdev=3567.72 00:29:15.525 clat percentiles (usec): 00:29:15.525 | 1.00th=[11207], 5.00th=[12256], 10.00th=[13173], 20.00th=[14222], 00:29:15.525 | 30.00th=[14877], 40.00th=[15795], 50.00th=[16909], 60.00th=[17957], 00:29:15.525 | 70.00th=[18744], 80.00th=[19792], 90.00th=[22676], 95.00th=[23725], 00:29:15.525 | 99.00th=[27395], 99.50th=[28181], 99.90th=[28443], 99.95th=[30016], 00:29:15.525 | 99.99th=[31327] 00:29:15.525 bw ( KiB/s): min=41952, max=56111, per=93.52%, avg=47771.75, stdev=6623.02, samples=4 00:29:15.525 iops : min= 2622, max= 3506, avg=2985.50, stdev=413.55, samples=4 00:29:15.525 lat (msec) : 4=0.02%, 10=8.97%, 20=81.59%, 50=9.41% 00:29:15.525 cpu : usr=77.90%, sys=15.83%, ctx=7, majf=0, minf=1977 00:29:15.525 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:29:15.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:15.525 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:15.525 issued rwts: total=11007,6015,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:15.525 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:15.525 00:29:15.525 Run status group 0 (all jobs): 00:29:15.525 READ: bw=85.6MiB/s (89.7MB/s), 85.6MiB/s-85.6MiB/s (89.7MB/s-89.7MB/s), io=172MiB (180MB), run=2010-2010msec 00:29:15.525 WRITE: bw=49.9MiB/s (52.3MB/s), 49.9MiB/s-49.9MiB/s (52.3MB/s-52.3MB/s), io=94.0MiB (98.5MB), run=1884-1884msec 00:29:15.782 ----------------------------------------------------- 00:29:15.782 Suppressions used: 00:29:15.782 count bytes template 00:29:15.782 1 57 /usr/src/fio/parse.c 00:29:15.782 803 77088 /usr/src/fio/iolog.c 00:29:15.782 1 8 libtcmalloc_minimal.so 00:29:15.782 ----------------------------------------------------- 00:29:15.782 00:29:15.782 18:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:16.040 18:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:16.040 18:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # 
bdfs=($(get_nvme_bdfs)) 00:29:16.040 18:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:16.040 18:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:16.040 18:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:29:16.040 18:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:16.040 18:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:16.040 18:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:16.040 18:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:29:16.040 18:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:29:16.040 18:36:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 00:29:16.298 Nvme0n1 00:29:16.298 18:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:16.558 18:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=20e31dd4-c9ea-4a3d-853d-355b46fe186d 00:29:16.558 18:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 20e31dd4-c9ea-4a3d-853d-355b46fe186d 00:29:16.558 18:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=20e31dd4-c9ea-4a3d-853d-355b46fe186d 00:29:16.558 18:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:16.558 18:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:16.559 18:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:16.559 18:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:16.816 18:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:16.816 { 00:29:16.816 "base_bdev": "Nvme0n1", 00:29:16.816 "block_size": 4096, 00:29:16.816 "cluster_size": 1073741824, 00:29:16.816 "free_clusters": 4, 00:29:16.816 "name": "lvs_0", 00:29:16.816 "total_data_clusters": 4, 00:29:16.816 "uuid": "20e31dd4-c9ea-4a3d-853d-355b46fe186d" 00:29:16.816 } 00:29:16.816 ]' 00:29:16.816 18:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="20e31dd4-c9ea-4a3d-853d-355b46fe186d") .free_clusters' 00:29:16.816 18:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 00:29:16.816 18:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="20e31dd4-c9ea-4a3d-853d-355b46fe186d") .cluster_size' 00:29:16.816 18:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:29:16.816 18:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 00:29:16.816 4096 00:29:16.816 18:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4096 00:29:16.816 18:36:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:29:17.074 53cb3d1d-542b-492f-af99-96872d6a81a9 00:29:17.074 18:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:17.331 18:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:17.588 18:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:17.844 18:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:17.844 18:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:17.844 18:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:17.844 18:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:17.844 18:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:17.844 18:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:29:17.844 18:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:17.844 18:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:17.844 18:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:17.844 18:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:29:17.844 18:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:17.844 18:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:18.102 18:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:18.102 18:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:18.102 18:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:29:18.102 18:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:29:18.102 18:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:18.102 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:18.102 fio-3.35 00:29:18.102 Starting 1 thread 00:29:20.631 00:29:20.631 test: (groupid=0, jobs=1): err= 0: pid=98576: Mon Jul 22 18:36:32 2024 00:29:20.631 read: IOPS=4633, 
BW=18.1MiB/s (19.0MB/s)(37.2MiB/2054msec) 00:29:20.631 slat (usec): min=2, max=440, avg= 3.71, stdev= 5.81 00:29:20.631 clat (usec): min=5762, max=65365, avg=14485.61, stdev=3767.07 00:29:20.631 lat (usec): min=5773, max=65369, avg=14489.31, stdev=3766.97 00:29:20.631 clat percentiles (usec): 00:29:20.631 | 1.00th=[11469], 5.00th=[12256], 10.00th=[12649], 20.00th=[13173], 00:29:20.631 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14222], 60.00th=[14484], 00:29:20.631 | 70.00th=[14746], 80.00th=[15270], 90.00th=[15926], 95.00th=[16450], 00:29:20.631 | 99.00th=[17957], 99.50th=[53740], 99.90th=[63177], 99.95th=[64750], 00:29:20.631 | 99.99th=[65274] 00:29:20.631 bw ( KiB/s): min=17800, max=19456, per=100.00%, avg=18918.00, stdev=774.79, samples=4 00:29:20.631 iops : min= 4450, max= 4864, avg=4729.50, stdev=193.70, samples=4 00:29:20.631 write: IOPS=4630, BW=18.1MiB/s (19.0MB/s)(37.2MiB/2054msec); 0 zone resets 00:29:20.631 slat (usec): min=2, max=185, avg= 3.83, stdev= 2.46 00:29:20.631 clat (usec): min=2784, max=65453, avg=12965.25, stdev=4128.91 00:29:20.631 lat (usec): min=2798, max=65457, avg=12969.07, stdev=4128.87 00:29:20.631 clat percentiles (usec): 00:29:20.631 | 1.00th=[10028], 5.00th=[10945], 10.00th=[11338], 20.00th=[11731], 00:29:20.631 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12649], 60.00th=[12911], 00:29:20.631 | 70.00th=[13173], 80.00th=[13566], 90.00th=[14091], 95.00th=[14484], 00:29:20.631 | 99.00th=[16188], 99.50th=[56886], 99.90th=[64750], 99.95th=[65274], 00:29:20.631 | 99.99th=[65274] 00:29:20.631 bw ( KiB/s): min=18464, max=19176, per=100.00%, avg=18888.00, stdev=335.05, samples=4 00:29:20.631 iops : min= 4616, max= 4794, avg=4722.00, stdev=83.76, samples=4 00:29:20.631 lat (msec) : 4=0.02%, 10=0.60%, 20=98.71%, 50=0.01%, 100=0.67% 00:29:20.631 cpu : usr=73.02%, sys=20.31%, ctx=6, majf=0, minf=1539 00:29:20.631 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:29:20.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:20.632 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:20.632 issued rwts: total=9518,9512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:20.632 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:20.632 00:29:20.632 Run status group 0 (all jobs): 00:29:20.632 READ: bw=18.1MiB/s (19.0MB/s), 18.1MiB/s-18.1MiB/s (19.0MB/s-19.0MB/s), io=37.2MiB (39.0MB), run=2054-2054msec 00:29:20.632 WRITE: bw=18.1MiB/s (19.0MB/s), 18.1MiB/s-18.1MiB/s (19.0MB/s-19.0MB/s), io=37.2MiB (39.0MB), run=2054-2054msec 00:29:20.890 ----------------------------------------------------- 00:29:20.890 Suppressions used: 00:29:20.890 count bytes template 00:29:20.890 1 58 /usr/src/fio/parse.c 00:29:20.890 1 8 libtcmalloc_minimal.so 00:29:20.890 ----------------------------------------------------- 00:29:20.890 00:29:20.890 18:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:21.150 18:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:21.409 18:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=60949cc2-9c04-464a-8a08-baaaf4170a7b 00:29:21.409 18:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 60949cc2-9c04-464a-8a08-baaaf4170a7b 00:29:21.409 18:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1364 -- # local lvs_uuid=60949cc2-9c04-464a-8a08-baaaf4170a7b 00:29:21.409 18:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:21.409 18:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:21.409 18:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:21.409 18:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:21.666 18:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:21.666 { 00:29:21.666 "base_bdev": "Nvme0n1", 00:29:21.666 "block_size": 4096, 00:29:21.666 "cluster_size": 1073741824, 00:29:21.666 "free_clusters": 0, 00:29:21.666 "name": "lvs_0", 00:29:21.666 "total_data_clusters": 4, 00:29:21.666 "uuid": "20e31dd4-c9ea-4a3d-853d-355b46fe186d" 00:29:21.666 }, 00:29:21.666 { 00:29:21.666 "base_bdev": "53cb3d1d-542b-492f-af99-96872d6a81a9", 00:29:21.666 "block_size": 4096, 00:29:21.666 "cluster_size": 4194304, 00:29:21.666 "free_clusters": 1022, 00:29:21.666 "name": "lvs_n_0", 00:29:21.666 "total_data_clusters": 1022, 00:29:21.666 "uuid": "60949cc2-9c04-464a-8a08-baaaf4170a7b" 00:29:21.666 } 00:29:21.666 ]' 00:29:21.666 18:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="60949cc2-9c04-464a-8a08-baaaf4170a7b") .free_clusters' 00:29:21.666 18:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 00:29:21.666 18:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="60949cc2-9c04-464a-8a08-baaaf4170a7b") .cluster_size' 00:29:21.666 18:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:21.666 18:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 00:29:21.666 4088 00:29:21.667 18:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 00:29:21.667 18:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:29:21.924 e1a1cadf-cd1a-4e60-aa07-4c9ddc4def5a 00:29:21.924 18:36:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:22.182 18:36:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:22.440 18:36:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:29:22.699 18:36:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:22.699 18:36:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:22.699 18:36:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local 
fio_dir=/usr/src/fio 00:29:22.699 18:36:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:22.699 18:36:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:22.699 18:36:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:29:22.699 18:36:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:22.699 18:36:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:22.699 18:36:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:22.699 18:36:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:29:22.699 18:36:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:22.699 18:36:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:22.699 18:36:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:22.699 18:36:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:22.699 18:36:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:29:22.699 18:36:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:29:22.699 18:36:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:22.958 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:22.958 fio-3.35 00:29:22.958 Starting 1 thread 00:29:25.528 00:29:25.528 test: (groupid=0, jobs=1): err= 0: pid=98691: Mon Jul 22 18:36:37 2024 00:29:25.528 read: IOPS=4123, BW=16.1MiB/s (16.9MB/s)(32.4MiB/2012msec) 00:29:25.528 slat (usec): min=2, max=332, avg= 3.88, stdev= 4.80 00:29:25.529 clat (usec): min=6514, max=28585, avg=16311.94, stdev=1603.94 00:29:25.529 lat (usec): min=6523, max=28589, avg=16315.81, stdev=1603.72 00:29:25.529 clat percentiles (usec): 00:29:25.529 | 1.00th=[13042], 5.00th=[13960], 10.00th=[14484], 20.00th=[15008], 00:29:25.529 | 30.00th=[15533], 40.00th=[15926], 50.00th=[16188], 60.00th=[16581], 00:29:25.529 | 70.00th=[17171], 80.00th=[17433], 90.00th=[18220], 95.00th=[19006], 00:29:25.529 | 99.00th=[20317], 99.50th=[21103], 99.90th=[26084], 99.95th=[26346], 00:29:25.529 | 99.99th=[28705] 00:29:25.529 bw ( KiB/s): min=15616, max=17408, per=99.61%, avg=16428.00, stdev=754.45, samples=4 00:29:25.529 iops : min= 3904, max= 4352, avg=4107.00, stdev=188.61, samples=4 00:29:25.529 write: IOPS=4137, BW=16.2MiB/s (16.9MB/s)(32.5MiB/2012msec); 0 zone resets 00:29:25.529 slat (usec): min=3, max=216, avg= 4.06, stdev= 2.98 00:29:25.529 clat (usec): min=3092, max=27981, avg=14494.38, stdev=1436.03 00:29:25.529 lat (usec): min=3104, max=27984, avg=14498.44, stdev=1435.85 00:29:25.529 clat percentiles (usec): 00:29:25.529 | 1.00th=[11469], 5.00th=[12518], 10.00th=[12911], 20.00th=[13435], 00:29:25.529 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14484], 60.00th=[14746], 
00:29:25.529 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16188], 95.00th=[16712], 00:29:25.529 | 99.00th=[17695], 99.50th=[18220], 99.90th=[24249], 99.95th=[24511], 00:29:25.529 | 99.99th=[27919] 00:29:25.529 bw ( KiB/s): min=16080, max=17112, per=99.95%, avg=16541.75, stdev=426.41, samples=4 00:29:25.529 iops : min= 4020, max= 4278, avg=4135.25, stdev=106.65, samples=4 00:29:25.529 lat (msec) : 4=0.02%, 10=0.31%, 20=98.83%, 50=0.84% 00:29:25.529 cpu : usr=72.75%, sys=20.93%, ctx=17, majf=0, minf=1539 00:29:25.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:29:25.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:25.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:25.529 issued rwts: total=8296,8324,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:25.529 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:25.529 00:29:25.529 Run status group 0 (all jobs): 00:29:25.529 READ: bw=16.1MiB/s (16.9MB/s), 16.1MiB/s-16.1MiB/s (16.9MB/s-16.9MB/s), io=32.4MiB (34.0MB), run=2012-2012msec 00:29:25.529 WRITE: bw=16.2MiB/s (16.9MB/s), 16.2MiB/s-16.2MiB/s (16.9MB/s-16.9MB/s), io=32.5MiB (34.1MB), run=2012-2012msec 00:29:25.529 ----------------------------------------------------- 00:29:25.529 Suppressions used: 00:29:25.529 count bytes template 00:29:25.529 1 58 /usr/src/fio/parse.c 00:29:25.529 1 8 libtcmalloc_minimal.so 00:29:25.529 ----------------------------------------------------- 00:29:25.529 00:29:25.529 18:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:25.787 18:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:29:25.787 18:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:26.045 18:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:26.303 18:36:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:29:26.596 18:36:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:26.875 18:36:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:27.809 18:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:27.809 18:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:29:27.809 18:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:29:27.809 18:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:27.809 18:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:29:27.809 18:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:27.809 18:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:29:27.809 18:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:27.809 18:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:27.809 rmmod nvme_tcp 00:29:27.809 rmmod nvme_fabrics 00:29:27.809 rmmod nvme_keyring 00:29:27.809 18:36:39 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:27.809 18:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:29:27.809 18:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:29:27.809 18:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 98266 ']' 00:29:27.809 18:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 98266 00:29:27.809 18:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 98266 ']' 00:29:27.809 18:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 98266 00:29:27.809 18:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:29:27.809 18:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:27.809 18:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98266 00:29:27.809 killing process with pid 98266 00:29:27.809 18:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:27.809 18:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:27.809 18:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98266' 00:29:27.809 18:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 98266 00:29:27.809 18:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 98266 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:29.713 00:29:29.713 real 0m22.605s 00:29:29.713 user 1m36.403s 00:29:29.713 sys 0m4.877s 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.713 ************************************ 00:29:29.713 END TEST nvmf_fio_host 00:29:29.713 ************************************ 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:29.713 18:36:41 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.713 ************************************ 00:29:29.713 START TEST nvmf_failover 00:29:29.713 ************************************ 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:29.713 * Looking for test storage... 00:29:29.713 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:29.713 18:36:41 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:29.713 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:29.714 18:36:41 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:29.714 Cannot find device "nvmf_tgt_br" 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # true 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:29.714 Cannot find device "nvmf_tgt_br2" 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # true 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:29.714 Cannot find device "nvmf_tgt_br" 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # true 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:29.714 Cannot find device "nvmf_tgt_br2" 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # true 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:29.714 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:29.714 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:29.714 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:29.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:29.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:29:29.973 00:29:29.973 --- 10.0.0.2 ping statistics --- 00:29:29.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.973 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:29.973 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:29.973 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:29:29.973 00:29:29.973 --- 10.0.0.3 ping statistics --- 00:29:29.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.973 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:29.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:29.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:29:29.973 00:29:29.973 --- 10.0.0.1 ping statistics --- 00:29:29.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.973 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=98978 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 98978 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 98978 ']' 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:29.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:29.973 18:36:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:29.973 [2024-07-22 18:36:41.989554] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:29:29.973 [2024-07-22 18:36:41.989758] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:30.232 [2024-07-22 18:36:42.171393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:30.490 [2024-07-22 18:36:42.462747] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:30.490 [2024-07-22 18:36:42.462816] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:30.490 [2024-07-22 18:36:42.462850] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:30.490 [2024-07-22 18:36:42.462869] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:30.490 [2024-07-22 18:36:42.462881] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:30.490 [2024-07-22 18:36:42.463057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:30.490 [2024-07-22 18:36:42.463205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.490 [2024-07-22 18:36:42.463219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:31.078 18:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:31.078 18:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:29:31.078 18:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:31.078 18:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:31.078 18:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:31.078 18:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:31.078 18:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:31.338 [2024-07-22 18:36:43.316379] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:31.338 18:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:31.903 Malloc0 00:29:31.903 18:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:32.162 18:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:32.420 18:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:32.678 [2024-07-22 18:36:44.579462] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:32.678 18:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:32.936 [2024-07-22 18:36:44.835921] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:32.936 18:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:33.194 [2024-07-22 18:36:45.148393] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:33.194 18:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=99096 00:29:33.194 
18:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:33.194 18:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:33.194 18:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 99096 /var/tmp/bdevperf.sock 00:29:33.194 18:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 99096 ']' 00:29:33.194 18:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:33.194 18:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:33.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:33.194 18:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:33.194 18:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:33.194 18:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:34.568 18:36:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:34.568 18:36:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:29:34.568 18:36:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:34.826 NVMe0n1 00:29:34.826 18:36:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:35.084 00:29:35.084 18:36:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=99142 00:29:35.084 18:36:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:35.085 18:36:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:29:36.457 18:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:36.457 [2024-07-22 18:36:48.365826] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:29:36.458 [2024-07-22 18:36:48.365946] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:29:36.458 [2024-07-22 18:36:48.365967] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:29:36.458 [2024-07-22 18:36:48.365984] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:29:36.458 [2024-07-22 18:36:48.366001] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000003880 is same with the state(5) to be set 00:29:36.458 [2024-07-22 18:36:48.366998] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:29:36.458 [2024-07-22 18:36:48.367015] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:29:36.458 [2024-07-22 18:36:48.367032] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:29:36.458 [2024-07-22 18:36:48.367048] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:29:36.458 [2024-07-22 18:36:48.367075] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:29:36.458 [2024-07-22 18:36:48.367095] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:29:36.458 18:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:29:39.736 18:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:39.994 00:29:39.994 18:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:40.254 [2024-07-22 18:36:52.129092] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.254 [2024-07-22 18:36:52.129219] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.254 [2024-07-22 18:36:52.129248] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.254 [2024-07-22 18:36:52.129271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.254 [2024-07-22 18:36:52.129294] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.254 [2024-07-22 18:36:52.129318] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.254 [2024-07-22 18:36:52.129340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.254 [2024-07-22 18:36:52.129364] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.254 [2024-07-22 18:36:52.129386] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.254 [2024-07-22 18:36:52.129407] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.254 [2024-07-22 18:36:52.129429] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.254 [2024-07-22 18:36:52.129450] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.254 [2024-07-22 18:36:52.130523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.254 [2024-07-22 18:36:52.130545] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.254 [2024-07-22 18:36:52.130564] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.254 [2024-07-22 18:36:52.130584] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.254 [2024-07-22 18:36:52.130604] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.254 [2024-07-22 18:36:52.130624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.254 [2024-07-22 18:36:52.130644] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.254 [2024-07-22 18:36:52.130664] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.254 [2024-07-22 18:36:52.130685] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.254 [2024-07-22 18:36:52.130707] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.254 [2024-07-22 18:36:52.130731] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.254 [2024-07-22 18:36:52.130753] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.255 [2024-07-22 18:36:52.130774] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.255 [2024-07-22 18:36:52.130796] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.255 [2024-07-22 18:36:52.130818] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.255 [2024-07-22 18:36:52.130863] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.255 [2024-07-22 18:36:52.130886] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.255 [2024-07-22 18:36:52.130910] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.255 [2024-07-22 18:36:52.130930] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.255 [2024-07-22 18:36:52.130951] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.255 [2024-07-22 18:36:52.130972] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.255 [2024-07-22 18:36:52.130994] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.255 [2024-07-22 18:36:52.131014] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.255 [2024-07-22 18:36:52.131033] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.255 [2024-07-22 18:36:52.131055] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.255 [2024-07-22 18:36:52.131077] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.255 [2024-07-22 18:36:52.131099] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.255 [2024-07-22 18:36:52.131121] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.255 [2024-07-22 18:36:52.131142] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.255 [2024-07-22 18:36:52.131164] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.255 [2024-07-22 18:36:52.131187] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.255 [2024-07-22 18:36:52.131208] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.255 [2024-07-22 18:36:52.131229] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:40.255 18:36:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:29:43.536 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:43.536 [2024-07-22 18:36:55.401193] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:43.536 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:29:44.470 18:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:45.035 18:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 99142 00:29:50.301 0 00:29:50.302 18:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 99096 00:29:50.302 18:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 99096 ']' 00:29:50.302 18:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 99096 00:29:50.302 18:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:29:50.302 18:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:50.302 
18:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99096 00:29:50.302 killing process with pid 99096 00:29:50.302 18:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:50.302 18:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:50.302 18:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99096' 00:29:50.302 18:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 99096 00:29:50.302 18:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 99096 00:29:51.681 18:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:29:51.681 [2024-07-22 18:36:45.290031] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:29:51.682 [2024-07-22 18:36:45.290247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99096 ] 00:29:51.682 [2024-07-22 18:36:45.468478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.682 [2024-07-22 18:36:45.756531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.682 Running I/O for 15 seconds... 00:29:51.682 [2024-07-22 18:36:48.368247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.682 [2024-07-22 18:36:48.368356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.368426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:49032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.368459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.368491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:49040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.368520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.368550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:49048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.368577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.368609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.368641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.368681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:49064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.368716] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.368749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:49072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.368776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.368805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:49080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.368848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.368883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:49088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.368910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.368940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:49096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.368968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.368998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:49104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.369025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.369097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:49112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.369127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.369156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:49120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.369184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.369214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:49128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.369241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.369271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.369298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.369328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:49144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.369355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.369385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.369411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.369441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:49160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.369468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.369498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:49168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.369525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.369555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.369581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.369610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:49184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.369638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.369668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:49192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.369695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.369724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:49200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.369751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.369788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:49208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.369826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.369878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.369907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.369938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:49224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.369964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.369994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:49232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.370021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.370066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:49240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.370095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.370126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:49248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.370154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.370184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.370222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.370253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:49264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.370280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.370310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:49272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.370337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.370368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:49280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.370395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.370424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:49288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.370450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.370479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:49296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.370507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.370538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:49304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.370566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:51.682 [2024-07-22 18:36:48.370607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:49312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.682 [2024-07-22 18:36:48.370635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.682 [2024-07-22 18:36:48.370665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.370693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.370723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:49328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.370750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.370779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.370806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.370851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:49344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.370882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.370912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:49352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.370939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.370969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:49360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.370996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.371026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.371054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.371084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.371112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.371143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.371176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.371207] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:49392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.371259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.371290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.371317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.371347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:49408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.371385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.371417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.371444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.371474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:49424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.371501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.371531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:49432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.371558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.371599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:49440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.371627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.371658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.371685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.371715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:49456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.371742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.371773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:49464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.371800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.371829] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:49472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.371875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.371906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:49480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.371932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.371962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:49488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.371990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.372020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:49496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.372048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.372077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:49504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.372104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.372133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:49512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.372177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.372209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:49520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.372236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.372266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:49528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.372293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.372323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:49536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.372351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.372381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:49544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.372407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.372437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:123 nsid:1 lba:49552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.372465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.372494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.372521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.372552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:49568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.372582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.372615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.372651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.372695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:49584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.372732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.372763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:49592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.372791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.372820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:49600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.372863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.372896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.372923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.372965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:49616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.372994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.373023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:49624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.373050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.683 [2024-07-22 18:36:48.373080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:49632 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.683 [2024-07-22 18:36:48.373107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.373136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:49640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.684 [2024-07-22 18:36:48.373170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.373200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:49648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.684 [2024-07-22 18:36:48.373227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.373258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:49656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.684 [2024-07-22 18:36:48.373284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.373314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:49736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.373340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.373371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:49744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.373398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.373427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.373454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.373483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:49760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.373511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.373541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:49768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.373568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.373597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.373624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.373653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:51.684 [2024-07-22 18:36:48.373692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.373723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.684 [2024-07-22 18:36:48.373750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.373779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:49672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.684 [2024-07-22 18:36:48.373806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.373856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:49680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.684 [2024-07-22 18:36:48.373887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.373916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:49688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.684 [2024-07-22 18:36:48.373942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.373971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:49696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.684 [2024-07-22 18:36:48.373997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.374027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:49704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.684 [2024-07-22 18:36:48.374066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.374097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:49712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.684 [2024-07-22 18:36:48.374130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.374160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:49720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.684 [2024-07-22 18:36:48.374186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.374216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.374254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.374283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:49800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.374310] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.374340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.374367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.374398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.374425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.374465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:49824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.374493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.374523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:49832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.374551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.374581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.374607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.374636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.374663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.374692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:49856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.374720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.374749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.374776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.374812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.374853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.374886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:49880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.374914] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.374944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.374971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.375001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:49896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.375029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.375059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:49904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.375091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.375122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.375169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.375199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.375228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.375275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.375305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.375336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:49936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.375362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.375392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.375419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.375449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.684 [2024-07-22 18:36:48.375475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.684 [2024-07-22 18:36:48.375506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:49960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.685 [2024-07-22 18:36:48.375533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:48.375564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.685 [2024-07-22 18:36:48.375593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:48.375623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:49976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.685 [2024-07-22 18:36:48.375650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:48.375679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.685 [2024-07-22 18:36:48.375706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:48.375735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.685 [2024-07-22 18:36:48.375763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:48.375793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:50000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.685 [2024-07-22 18:36:48.375820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:48.375867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:50008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.685 [2024-07-22 18:36:48.375896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:48.375926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.685 [2024-07-22 18:36:48.375953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:48.375983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:50024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.685 [2024-07-22 18:36:48.376019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:48.376051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:50032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.685 [2024-07-22 18:36:48.376080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:48.376110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:50040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.685 [2024-07-22 18:36:48.376137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 
[2024-07-22 18:36:48.376166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(5) to be set 00:29:51.685 [2024-07-22 18:36:48.376206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:51.685 [2024-07-22 18:36:48.376230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:51.685 [2024-07-22 18:36:48.376254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50048 len:8 PRP1 0x0 PRP2 0x0 00:29:51.685 [2024-07-22 18:36:48.376290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:48.376675] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b780 was disconnected and freed. reset controller. 00:29:51.685 [2024-07-22 18:36:48.376716] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:51.685 [2024-07-22 18:36:48.376852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.685 [2024-07-22 18:36:48.376891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:48.376922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.685 [2024-07-22 18:36:48.376949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:48.376975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.685 [2024-07-22 18:36:48.377000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:48.377027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.685 [2024-07-22 18:36:48.377053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:48.377078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.685 [2024-07-22 18:36:48.377171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:29:51.685 [2024-07-22 18:36:48.382124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.685 [2024-07-22 18:36:48.428971] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
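For reference, the failover captured in the bdevperf log above is driven entirely from the test script with SPDK's rpc.py: the host-side bdevperf process is attached to the subsystem through a second portal, and the target's listeners are then removed and re-added while I/O is running, which is consistent with the aborted commands, the "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421" notice, and the "Resetting controller successful." notice above. A minimal sketch of that cycle, reusing the sockets, addresses and NQN traced earlier in this run (hypothetical only as a standalone snippet, not the full failover.sh script), looks like:

  # host side: let bdevperf reach the subsystem through the secondary portal as well
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # target side: drop one portal so in-flight I/O is aborted and bdev_nvme fails over,
  # then restore the original portal and retire the temporary one
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 3
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

Each listener removal produces one disconnect/failover/reset sequence like the one ending above; the second such sequence, triggered at 18:36:52, follows below.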
00:29:51.685 [2024-07-22 18:36:52.127283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.685 [2024-07-22 18:36:52.127386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:52.127416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.685 [2024-07-22 18:36:52.127459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:52.127481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.685 [2024-07-22 18:36:52.127499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:52.127518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.685 [2024-07-22 18:36:52.127536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:52.127554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:29:51.685 [2024-07-22 18:36:52.132005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:86016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-22 18:36:52.132051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:52.132109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-22 18:36:52.132139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:52.132163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:86032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-22 18:36:52.132184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:52.132205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-22 18:36:52.132226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:52.132247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:86048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-22 18:36:52.132267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:52.132289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:86056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-22 18:36:52.132309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:52.132330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-22 18:36:52.132350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:52.132372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:86072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-22 18:36:52.132392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:52.132413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:86080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-22 18:36:52.132432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:52.132454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:86088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-22 18:36:52.132490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:52.132515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-22 18:36:52.132535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:52.132557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:86104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-22 18:36:52.132577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:52.132598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:86112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-22 18:36:52.132618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:52.132639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:86120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-22 18:36:52.132659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:52.132680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:86128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-22 18:36:52.132700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-22 18:36:52.132721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:86136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-22 18:36:52.132741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.132762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:86144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-22 18:36:52.132782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.132804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:86152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-22 18:36:52.132823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.132863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:86160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-22 18:36:52.132885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.132906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:86168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-22 18:36:52.132926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.132948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-22 18:36:52.132967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.132988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-22 18:36:52.133007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.133039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-22 18:36:52.133060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.133081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-22 18:36:52.133101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.133122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:86272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.686 [2024-07-22 18:36:52.133142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.133165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:86280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.686 [2024-07-22 18:36:52.133186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.133208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:86288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.686 [2024-07-22 18:36:52.133228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.133249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.686 [2024-07-22 18:36:52.133269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.133291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:86304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.686 [2024-07-22 18:36:52.133310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.133333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:86312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.686 [2024-07-22 18:36:52.133353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.133374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:86320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.686 [2024-07-22 18:36:52.133411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.133435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:86328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.686 [2024-07-22 18:36:52.133456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.133478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:86208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-22 18:36:52.133498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.133536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:86336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.686 [2024-07-22 18:36:52.133556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.133578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:86344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.686 [2024-07-22 18:36:52.133598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.133629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.686 [2024-07-22 18:36:52.133650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 
[2024-07-22 18:36:52.133672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:86360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.686 [2024-07-22 18:36:52.133691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.133712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:86368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.686 [2024-07-22 18:36:52.133733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.133754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.686 [2024-07-22 18:36:52.133774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.133796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:86384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.686 [2024-07-22 18:36:52.133816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.133853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:86392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.686 [2024-07-22 18:36:52.133877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.133900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:86400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.686 [2024-07-22 18:36:52.133921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.133943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:86408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.686 [2024-07-22 18:36:52.133963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.133984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.686 [2024-07-22 18:36:52.134004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.134026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:86424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.686 [2024-07-22 18:36:52.134059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.134095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.686 [2024-07-22 18:36:52.134116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.134137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:86440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.686 [2024-07-22 18:36:52.134157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-22 18:36:52.134179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:86448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.134208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.134232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:86456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.134253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.134274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:86464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.134295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.134317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:86472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.134337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.134359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:86480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.134379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.134400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:86488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.134420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.134441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:86496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.134462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.134484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.134504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.134526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:86512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.134546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.134567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:104 nsid:1 lba:86520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.134596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.134617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-22 18:36:52.134637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.134659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:86224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-22 18:36:52.134679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.134700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:86232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-22 18:36:52.134719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.134748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:86240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-22 18:36:52.134769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.134790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-22 18:36:52.134810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.134843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-22 18:36:52.134867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.134889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-22 18:36:52.134910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.134932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:86528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.134951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.134972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:86536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.134992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.135013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:86544 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.135033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.135054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:86552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.135073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.135094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:86560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.135114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.135135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.135155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.135185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:86576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.135206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.135236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:86584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.135256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.135278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:86592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.135305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.135328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.135348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.135370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.135390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.135411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:86616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.135430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.135452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:86624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 
[2024-07-22 18:36:52.135471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.135492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:86632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.135513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.135534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:86640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.135554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.135575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:86648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.135596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.135617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:86656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.135637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.135658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:86664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.135678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.135700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.135720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.135741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:86680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.135761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.135785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:86688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.135804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.135826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:86696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.135868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.135893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:86704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.687 [2024-07-22 18:36:52.135914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.687 [2024-07-22 18:36:52.135935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.688 [2024-07-22 18:36:52.135955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.135976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:86720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.688 [2024-07-22 18:36:52.135996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.136017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:86728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.688 [2024-07-22 18:36:52.136037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.136059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:86736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.688 [2024-07-22 18:36:52.136079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.136099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:86744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.688 [2024-07-22 18:36:52.136119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.136141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:86752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.688 [2024-07-22 18:36:52.136160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.136182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:86760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.688 [2024-07-22 18:36:52.136202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.136223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:86768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.688 [2024-07-22 18:36:52.136265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.136291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.688 [2024-07-22 18:36:52.136311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.136333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.688 [2024-07-22 18:36:52.136352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.136374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.688 [2024-07-22 18:36:52.136393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.136424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:86800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.688 [2024-07-22 18:36:52.136445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.136466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.688 [2024-07-22 18:36:52.136485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.136507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:86816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.688 [2024-07-22 18:36:52.136527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.136548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.688 [2024-07-22 18:36:52.136568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.136589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.688 [2024-07-22 18:36:52.136609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.136639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.688 [2024-07-22 18:36:52.136659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.136680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.688 [2024-07-22 18:36:52.136699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.136720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:86856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.688 [2024-07-22 18:36:52.136740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.136761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:86864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.688 [2024-07-22 18:36:52.136781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:51.688 [2024-07-22 18:36:52.136802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:86872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.688 [2024-07-22 18:36:52.136821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.136857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:86880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.688 [2024-07-22 18:36:52.136879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.136901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:86888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.688 [2024-07-22 18:36:52.136920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.136941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:86896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.688 [2024-07-22 18:36:52.136970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.136993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.688 [2024-07-22 18:36:52.137014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.137063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:51.688 [2024-07-22 18:36:52.137096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86912 len:8 PRP1 0x0 PRP2 0x0 00:29:51.688 [2024-07-22 18:36:52.137118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.137141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:51.688 [2024-07-22 18:36:52.137158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:51.688 [2024-07-22 18:36:52.137175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86920 len:8 PRP1 0x0 PRP2 0x0 00:29:51.688 [2024-07-22 18:36:52.137194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.137213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:51.688 [2024-07-22 18:36:52.137227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:51.688 [2024-07-22 18:36:52.137243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86928 len:8 PRP1 0x0 PRP2 0x0 00:29:51.688 [2024-07-22 18:36:52.137260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.137277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:51.688 [2024-07-22 18:36:52.137292] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:51.688 [2024-07-22 18:36:52.137307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86936 len:8 PRP1 0x0 PRP2 0x0 00:29:51.688 [2024-07-22 18:36:52.137325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.137342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:51.688 [2024-07-22 18:36:52.137356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:51.688 [2024-07-22 18:36:52.137371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86944 len:8 PRP1 0x0 PRP2 0x0 00:29:51.688 [2024-07-22 18:36:52.137389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.137406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:51.688 [2024-07-22 18:36:52.137420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:51.688 [2024-07-22 18:36:52.137435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86952 len:8 PRP1 0x0 PRP2 0x0 00:29:51.688 [2024-07-22 18:36:52.137454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.137471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:51.688 [2024-07-22 18:36:52.137495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:51.688 [2024-07-22 18:36:52.137511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86960 len:8 PRP1 0x0 PRP2 0x0 00:29:51.688 [2024-07-22 18:36:52.137529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.137556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:51.688 [2024-07-22 18:36:52.137572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:51.688 [2024-07-22 18:36:52.137588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86968 len:8 PRP1 0x0 PRP2 0x0 00:29:51.688 [2024-07-22 18:36:52.137606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.688 [2024-07-22 18:36:52.137623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:51.688 [2024-07-22 18:36:52.137638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:51.688 [2024-07-22 18:36:52.137653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86976 len:8 PRP1 0x0 PRP2 0x0 00:29:51.688 [2024-07-22 18:36:52.137671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:52.137688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:51.689 [2024-07-22 18:36:52.137703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:29:51.689 [2024-07-22 18:36:52.137724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86984 len:8 PRP1 0x0 PRP2 0x0 00:29:51.689 [2024-07-22 18:36:52.137743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:52.137761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:51.689 [2024-07-22 18:36:52.137776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:51.689 [2024-07-22 18:36:52.137791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86992 len:8 PRP1 0x0 PRP2 0x0 00:29:51.689 [2024-07-22 18:36:52.137808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:52.137826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:51.689 [2024-07-22 18:36:52.137858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:51.689 [2024-07-22 18:36:52.137875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87000 len:8 PRP1 0x0 PRP2 0x0 00:29:51.689 [2024-07-22 18:36:52.137894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:52.137912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:51.689 [2024-07-22 18:36:52.137927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:51.689 [2024-07-22 18:36:52.137942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87008 len:8 PRP1 0x0 PRP2 0x0 00:29:51.689 [2024-07-22 18:36:52.137961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:52.137978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:51.689 [2024-07-22 18:36:52.137993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:51.689 [2024-07-22 18:36:52.138008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87016 len:8 PRP1 0x0 PRP2 0x0 00:29:51.689 [2024-07-22 18:36:52.138025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:52.138055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:51.689 [2024-07-22 18:36:52.138079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:51.689 [2024-07-22 18:36:52.138095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87024 len:8 PRP1 0x0 PRP2 0x0 00:29:51.689 [2024-07-22 18:36:52.138125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:52.138144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:51.689 [2024-07-22 18:36:52.138159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:51.689 
[2024-07-22 18:36:52.138175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87032 len:8 PRP1 0x0 PRP2 0x0 00:29:51.689 [2024-07-22 18:36:52.138193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:52.138479] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002ba00 was disconnected and freed. reset controller. 00:29:51.689 [2024-07-22 18:36:52.138507] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:29:51.689 [2024-07-22 18:36:52.138529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.689 [2024-07-22 18:36:52.138613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:29:51.689 [2024-07-22 18:36:52.142701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.689 [2024-07-22 18:36:52.189885] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:51.689 [2024-07-22 18:36:56.721851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.689 [2024-07-22 18:36:56.721968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:56.722000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.689 [2024-07-22 18:36:56.722019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:56.722057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.689 [2024-07-22 18:36:56.722082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:56.722103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.689 [2024-07-22 18:36:56.722121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:56.722141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:29:51.689 [2024-07-22 18:36:56.723529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.689 [2024-07-22 18:36:56.723577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:56.723620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.689 [2024-07-22 18:36:56.723645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:56.723669] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.689 [2024-07-22 18:36:56.723691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:56.723714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.689 [2024-07-22 18:36:56.723772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:56.723809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.689 [2024-07-22 18:36:56.723857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:56.723894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.689 [2024-07-22 18:36:56.723928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:56.723953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.689 [2024-07-22 18:36:56.723974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:56.723996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.689 [2024-07-22 18:36:56.724039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:56.724063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.689 [2024-07-22 18:36:56.724084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:56.724106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.689 [2024-07-22 18:36:56.724126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:56.724161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.689 [2024-07-22 18:36:56.724181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:56.724203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.689 [2024-07-22 18:36:56.724223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:56.724246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:120 nsid:1 lba:65104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.689 [2024-07-22 18:36:56.724266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:56.724288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.689 [2024-07-22 18:36:56.724308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:56.724329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.689 [2024-07-22 18:36:56.724349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:56.724371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.689 [2024-07-22 18:36:56.724391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:56.724413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:65136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.689 [2024-07-22 18:36:56.724452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:56.724484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.689 [2024-07-22 18:36:56.724513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:56.724543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:65152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.689 [2024-07-22 18:36:56.724570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:56.724605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:65160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.689 [2024-07-22 18:36:56.724632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.689 [2024-07-22 18:36:56.724661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.690 [2024-07-22 18:36:56.724688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.724718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.690 [2024-07-22 18:36:56.724745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.724774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:65184 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.690 [2024-07-22 18:36:56.724802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.724848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.690 [2024-07-22 18:36:56.724880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.724911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.690 [2024-07-22 18:36:56.724939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.724968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.690 [2024-07-22 18:36:56.724996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.725025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.690 [2024-07-22 18:36:56.725052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.725081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.690 [2024-07-22 18:36:56.725108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.725138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.690 [2024-07-22 18:36:56.725168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.725215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.690 [2024-07-22 18:36:56.725237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.725259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.690 [2024-07-22 18:36:56.725281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.725303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.690 [2024-07-22 18:36:56.725324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.725345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:51.690 [2024-07-22 18:36:56.725366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.725388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.690 [2024-07-22 18:36:56.725410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.725432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.690 [2024-07-22 18:36:56.725453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.725485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.690 [2024-07-22 18:36:56.725514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.725544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.690 [2024-07-22 18:36:56.725571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.725601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.690 [2024-07-22 18:36:56.725628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.725657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.690 [2024-07-22 18:36:56.725685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.725715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.690 [2024-07-22 18:36:56.725743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.725772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.690 [2024-07-22 18:36:56.725799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.725843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.690 [2024-07-22 18:36:56.725887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.725919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.690 [2024-07-22 18:36:56.725947] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.725976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.690 [2024-07-22 18:36:56.726003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.726051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.690 [2024-07-22 18:36:56.726087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.726113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.690 [2024-07-22 18:36:56.726136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.726158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.690 [2024-07-22 18:36:56.726179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.726201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.690 [2024-07-22 18:36:56.726221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.726242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.690 [2024-07-22 18:36:56.726262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.726283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.690 [2024-07-22 18:36:56.726304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.726325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.690 [2024-07-22 18:36:56.726346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.726366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.690 [2024-07-22 18:36:56.726387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.726410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.690 [2024-07-22 18:36:56.726430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.726451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.690 [2024-07-22 18:36:56.726471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.726503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.690 [2024-07-22 18:36:56.726526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.726556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.690 [2024-07-22 18:36:56.726578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.726600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.690 [2024-07-22 18:36:56.726628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.690 [2024-07-22 18:36:56.726653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.726674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.726695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.726715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.726737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.726758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.726780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.726802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.726850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.726882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.726911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.726939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.726969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.726996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.727025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.727052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.727081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.727109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.727139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.727165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.727208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.727237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.727267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.727295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.727324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.727351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.727382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.727415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.727448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.727495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.727525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.727555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 
18:36:56.727585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.727618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.727648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.727676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.727706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.727734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.727767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.727798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.727829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.727878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.727909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.727937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.727966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.728008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.728039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.728066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.728096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.728123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.728152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.728180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.728209] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.728237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.728265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.728292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.728323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.728350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.728379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.728407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.728436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.728464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.728493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.728522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.728551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.728580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.728610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.728638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.728669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.728701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.728737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.691 [2024-07-22 18:36:56.728759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.691 [2024-07-22 18:36:56.728781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:37 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.692 [2024-07-22 18:36:56.728802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.728823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.692 [2024-07-22 18:36:56.728857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.728881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.692 [2024-07-22 18:36:56.728902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.728924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.692 [2024-07-22 18:36:56.728944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.728965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.692 [2024-07-22 18:36:56.728986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.729015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.692 [2024-07-22 18:36:56.729036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.729058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.692 [2024-07-22 18:36:56.729079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.729103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.692 [2024-07-22 18:36:56.729134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.729163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.692 [2024-07-22 18:36:56.729190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.729219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.692 [2024-07-22 18:36:56.729247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.729276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66080 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:51.692 [2024-07-22 18:36:56.729303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.729332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:65272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.692 [2024-07-22 18:36:56.729369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.729401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.692 [2024-07-22 18:36:56.729430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.729459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.692 [2024-07-22 18:36:56.729492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.729522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.692 [2024-07-22 18:36:56.729549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.729578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:65304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.692 [2024-07-22 18:36:56.729612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.729640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:65312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.692 [2024-07-22 18:36:56.729661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.729683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:65320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.692 [2024-07-22 18:36:56.729712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.729748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.692 [2024-07-22 18:36:56.729778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.729807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:65328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.692 [2024-07-22 18:36:56.729854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.729883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.692 
[2024-07-22 18:36:56.729905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.729928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.692 [2024-07-22 18:36:56.729949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.729970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.692 [2024-07-22 18:36:56.729991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.730013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.692 [2024-07-22 18:36:56.730047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.730085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.692 [2024-07-22 18:36:56.730126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.730158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.692 [2024-07-22 18:36:56.730186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.730216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:65384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.692 [2024-07-22 18:36:56.730242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.730272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:65392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.692 [2024-07-22 18:36:56.730299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.730328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:65400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.692 [2024-07-22 18:36:56.730356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.730386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:65408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.692 [2024-07-22 18:36:56.730414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.730443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.692 [2024-07-22 18:36:56.730470] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.730500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.692 [2024-07-22 18:36:56.730527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.730555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.692 [2024-07-22 18:36:56.730581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.730610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:65440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.692 [2024-07-22 18:36:56.730637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.730664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(5) to be set 00:29:51.692 [2024-07-22 18:36:56.730697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:51.692 [2024-07-22 18:36:56.730719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:51.692 [2024-07-22 18:36:56.730743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65448 len:8 PRP1 0x0 PRP2 0x0 00:29:51.692 [2024-07-22 18:36:56.730774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.692 [2024-07-22 18:36:56.731084] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002c180 was disconnected and freed. reset controller. 00:29:51.692 [2024-07-22 18:36:56.731115] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:29:51.692 [2024-07-22 18:36:56.731149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.692 [2024-07-22 18:36:56.735702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.692 [2024-07-22 18:36:56.735773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:29:51.692 [2024-07-22 18:36:56.783404] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
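
The long run of WRITE/READ commands paired with "ABORTED - SQ DELETION" completions above is bdev_nvme draining every I/O still queued on the dropped qpair before it fails over from 10.0.0.2:4422 to 10.0.0.2:4420; the sequence ends, as expected, with the qpair being freed and "Resetting controller successful". A quick way to sanity-check a captured run log like the try.txt file this test writes is to compare the per-I/O aborts against the completed resets. The snippet below is only a sketch, not part of failover.sh; the log path is the one printed elsewhere in this output.

log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
aborts=$(grep -c 'ABORTED - SQ DELETION' "$log")            # one hit per queued I/O drained during failover
resets=$(grep -c 'Resetting controller successful' "$log")  # one hit per failover that completed
echo "aborted I/O: $aborts  successful resets: $resets"
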
00:29:51.692
00:29:51.692 Latency(us)
00:29:51.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:51.692 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:51.692 Verification LBA range: start 0x0 length 0x4000
00:29:51.693 NVMe0n1 : 15.02 5963.02 23.29 265.65 0.00 20509.22 819.20 38844.97
00:29:51.693 ===================================================================================================================
00:29:51.693 Total : 5963.02 23.29 265.65 0.00 20509.22 819.20 38844.97
00:29:51.693 Received shutdown signal, test time was about 15.000000 seconds
00:29:51.693
00:29:51.693 Latency(us)
00:29:51.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:51.693 ===================================================================================================================
00:29:51.693 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:51.693 18:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:29:51.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:29:51.693 18:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:29:51.693 18:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:29:51.693 18:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=99352
00:29:51.693 18:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:29:51.693 18:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 99352 /var/tmp/bdevperf.sock
00:29:51.693 18:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 99352 ']'
00:29:51.693 18:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:51.693 18:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:51.693 18:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
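
failover.sh counts exactly three "Resetting controller successful" lines (one per path it tears down) and only then moves on, starting a second bdevperf in RPC-server mode: -z keeps the app idle until perform_tests is sent, and -r names the UNIX socket its RPC server listens on, so controllers can be attached and detached while it runs. A minimal sketch of that launch-and-wait pattern follows; the binary path, flags, socket and the waitforlisten helper are the ones traced above, the variable names are illustrative.

sock=/var/tmp/bdevperf.sock
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r "$sock" -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!
# waitforlisten (autotest_common.sh) polls until the process answers RPCs on $sock before any rpc.py call is issued
waitforlisten "$bdevperf_pid" "$sock"
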
00:29:51.693 18:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:51.693 18:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:53.083 18:37:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:53.083 18:37:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:29:53.083 18:37:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:53.341 [2024-07-22 18:37:05.171484] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:53.341 18:37:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:53.598 [2024-07-22 18:37:05.411668] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:53.598 18:37:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:53.856 NVMe0n1 00:29:53.856 18:37:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:54.421 00:29:54.421 18:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:54.679 00:29:54.679 18:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:54.679 18:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:29:54.937 18:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:55.195 18:37:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:29:58.503 18:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:58.503 18:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:29:58.503 18:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=99487 00:29:58.503 18:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:58.503 18:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 99487 00:29:59.878 0 00:29:59.878 18:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:29:59.878 [2024-07-22 18:37:03.728536] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
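
The RPC calls traced above are the heart of the scenario: listeners are added on ports 4421 and 4422, the same subsystem is attached to bdevperf three times under one controller name (NVMe0) so that 4420, 4421 and 4422 all become failover candidates, and the active 4420 path is then detached while I/O is running. The sketch below is a condensed restatement of that sequence, not the script itself; addresses, ports and the NQN are exactly those in the trace, the loop is just shorthand. The try.txt dump that follows is bdevperf's own log of the resulting failovers.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
for port in 4420 4421 4422; do    # three paths, one controller name
    $rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
$rpc -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0
$rpc -s "$sock" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # drop the active path
sleep 3   # give bdev_nvme time to fail over to the next trid
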
00:29:59.878 [2024-07-22 18:37:03.728771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99352 ] 00:29:59.878 [2024-07-22 18:37:03.907632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.878 [2024-07-22 18:37:04.214053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.878 [2024-07-22 18:37:07.102064] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:59.878 [2024-07-22 18:37:07.102263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:59.878 [2024-07-22 18:37:07.102301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.878 [2024-07-22 18:37:07.102331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:59.878 [2024-07-22 18:37:07.102352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.878 [2024-07-22 18:37:07.102373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:59.878 [2024-07-22 18:37:07.102393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.878 [2024-07-22 18:37:07.102415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:59.878 [2024-07-22 18:37:07.102435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.878 [2024-07-22 18:37:07.102455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.878 [2024-07-22 18:37:07.102553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.878 [2024-07-22 18:37:07.102605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:29:59.878 [2024-07-22 18:37:07.107306] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:59.878 Running I/O for 1 seconds... 
00:29:59.878 00:29:59.879 Latency(us) 00:29:59.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:59.879 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:59.879 Verification LBA range: start 0x0 length 0x4000 00:29:59.879 NVMe0n1 : 1.02 6191.80 24.19 0.00 0.00 20562.28 3559.80 18945.86 00:29:59.879 =================================================================================================================== 00:29:59.879 Total : 6191.80 24.19 0.00 0.00 20562.28 3559.80 18945.86 00:29:59.879 18:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:29:59.879 18:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:00.137 18:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:00.702 18:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:00.702 18:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:00.960 18:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:01.217 18:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:30:04.508 18:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:04.508 18:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:30:04.508 18:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 99352 00:30:04.508 18:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 99352 ']' 00:30:04.508 18:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 99352 00:30:04.508 18:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:30:04.508 18:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:04.508 18:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99352 00:30:04.508 killing process with pid 99352 00:30:04.508 18:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:04.508 18:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:04.508 18:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99352' 00:30:04.508 18:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 99352 00:30:04.508 18:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 99352 00:30:05.880 18:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:30:05.880 18:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:06.138 18:37:18 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:06.138 18:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:30:06.138 18:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:06.138 18:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:06.138 18:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:30:06.138 18:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:06.138 18:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:30:06.138 18:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:06.138 18:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:06.138 rmmod nvme_tcp 00:30:06.138 rmmod nvme_fabrics 00:30:06.138 rmmod nvme_keyring 00:30:06.138 18:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:06.138 18:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:30:06.138 18:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:30:06.138 18:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 98978 ']' 00:30:06.138 18:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 98978 00:30:06.138 18:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 98978 ']' 00:30:06.138 18:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 98978 00:30:06.138 18:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:30:06.138 18:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:06.138 18:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98978 00:30:06.415 killing process with pid 98978 00:30:06.415 18:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:06.415 18:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:06.415 18:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98978' 00:30:06.415 18:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 98978 00:30:06.415 18:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 98978 00:30:07.787 18:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:07.787 18:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:07.787 18:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:07.787 18:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:07.787 18:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:07.787 18:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.787 18:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:07.787 18:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.787 
18:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:07.787 ************************************ 00:30:07.787 END TEST nvmf_failover 00:30:07.787 ************************************ 00:30:07.787 00:30:07.787 real 0m38.355s 00:30:07.787 user 2m27.659s 00:30:07.787 sys 0m5.353s 00:30:07.787 18:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:07.787 18:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:07.787 18:37:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:30:07.787 18:37:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:07.787 18:37:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:07.787 18:37:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:07.787 18:37:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.787 ************************************ 00:30:07.787 START TEST nvmf_host_discovery 00:30:07.787 ************************************ 00:30:07.787 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:08.046 * Looking for test storage... 00:30:08.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:08.046 18:37:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 
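
For orientation, the ip/iptables commands traced in the next part of the log (nvmf_veth_init) amount to the topology below. This is a condensed sketch reconstructed from the trace itself, not a verbatim excerpt of nvmf/common.sh; every interface, namespace and address name is taken from the traced commands.

# Target runs in its own network namespace; three veth pairs hang off one bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side (10.0.0.1)
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target interface (10.0.0.2)
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface (10.0.0.3)
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring everything up (the trace does this per link, including lo inside the namespace).
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge ties the *_br peers together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # sanity checks before the target starts
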
00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:08.046 Cannot find device "nvmf_tgt_br" 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:08.046 Cannot find device "nvmf_tgt_br2" 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:08.046 Cannot find device "nvmf_tgt_br" 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:08.046 Cannot find device "nvmf_tgt_br2" 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:08.046 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:08.046 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:08.046 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:08.046 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:30:08.046 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:08.046 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:08.046 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:30:08.046 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:08.047 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip 
link add nvmf_init_if type veth peer name nvmf_init_br 00:30:08.047 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:08.047 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:08.047 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:08.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:08.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:30:08.305 00:30:08.305 --- 10.0.0.2 ping statistics --- 00:30:08.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.305 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:08.305 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:30:08.305 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:30:08.305 00:30:08.305 --- 10.0.0.3 ping statistics --- 00:30:08.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.305 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:08.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:08.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:30:08.305 00:30:08.305 --- 10.0.0.1 ping statistics --- 00:30:08.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.305 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=99820 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 99820 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 99820 ']' 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:08.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
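
The target application is then started inside the namespace with core mask 0x2, and the harness blocks until the JSON-RPC socket answers. The snippet below is a rough sketch of that sequence using SPDK's scripts/rpc.py client; the real waitforlisten helper in autotest_common.sh does additional bookkeeping (pid liveness checks, timeouts) not shown here.

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Poll the default RPC socket (/var/tmp/spdk.sock) until the target responds.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
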
00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:08.305 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:08.563 [2024-07-22 18:37:20.355554] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:30:08.563 [2024-07-22 18:37:20.355728] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:08.563 [2024-07-22 18:37:20.522147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.821 [2024-07-22 18:37:20.810991] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:08.821 [2024-07-22 18:37:20.811200] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:08.821 [2024-07-22 18:37:20.811261] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:08.821 [2024-07-22 18:37:20.811348] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:08.821 [2024-07-22 18:37:20.811403] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:08.821 [2024-07-22 18:37:20.811662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.388 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:09.388 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:30:09.388 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:09.388 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:09.388 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.388 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:09.388 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:09.388 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:09.388 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.388 [2024-07-22 18:37:21.371051] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:09.388 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:09.388 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:09.388 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:09.388 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.388 [2024-07-22 18:37:21.379145] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:09.388 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:09.388 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 
-- # rpc_cmd bdev_null_create null0 1000 512 00:30:09.388 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:09.388 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.388 null0 00:30:09.388 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:09.388 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:09.388 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:09.388 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.647 null1 00:30:09.647 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:09.647 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:09.647 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:09.647 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.647 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:09.647 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:09.647 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=99870 00:30:09.647 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:09.647 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 99870 /tmp/host.sock 00:30:09.647 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 99870 ']' 00:30:09.647 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:30:09.647 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:09.647 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:09.647 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:09.647 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:09.647 [2024-07-22 18:37:21.552081] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
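
At this point two SPDK applications are running: the target inside the namespace (RPC on the default /var/tmp/spdk.sock) and a host-side instance on /tmp/host.sock acting as the NVMe-oF initiator. The discovery flow exercised in the rest of the trace boils down to the RPC calls below, shown with scripts/rpc.py as a condensed sketch of what rpc_cmd issues; all arguments are copied from the trace.

# Target side: TCP transport plus a discovery-service listener on port 8009.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
    nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009

# Host side (-s /tmp/host.sock): attach a discovery controller. bdev_nvme then
# attaches a controller for each subsystem the discovery service reports and
# exposes its namespaces as bdevs (nvme0n1, nvme0n2 later in the trace).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
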
00:30:09.647 [2024-07-22 18:37:21.552643] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99870 ] 00:30:09.906 [2024-07-22 18:37:21.735320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.168 [2024-07-22 18:37:22.053286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.742 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:10.742 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:30:10.742 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:10.742 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:10.742 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.742 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.742 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.742 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:10.742 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:10.743 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.000 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:11.001 [2024-07-22 18:37:22.952294] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:11.001 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.001 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:30:11.001 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:30:11.001 18:37:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:11.001 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.001 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:11.001 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:11.001 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:11.001 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:30:11.259 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:30:11.825 [2024-07-22 18:37:23.581869] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:11.825 [2024-07-22 18:37:23.581940] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:11.825 
[2024-07-22 18:37:23.581999] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:11.825 [2024-07-22 18:37:23.670125] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:11.825 [2024-07-22 18:37:23.734429] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:11.825 [2024-07-22 18:37:23.734491] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:12.390 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:12.390 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:12.390 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:12.390 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:12.390 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:12.390 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:12.390 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:12.390 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.390 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.390 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.390 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:12.390 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:12.390 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:12.390 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:12.390 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:12.390 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:12.390 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:30:12.390 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.391 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:12.649 18:37:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.649 [2024-07-22 18:37:24.562085] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:12.649 [2024-07-22 18:37:24.562743] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:12.649 [2024-07-22 18:37:24.562820] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:12.649 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:12.650 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:12.650 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.650 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:12.650 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:12.650 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:12.650 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:12.650 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:12.650 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:12.650 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:12.650 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:12.650 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:12.650 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.650 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.650 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:12.650 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:12.650 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:12.650 [2024-07-22 18:37:24.650026] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:12.907 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.907 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:12.907 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:12.907 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:12.907 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:12.907 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:12.907 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:12.907 18:37:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:12.907 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:12.907 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:12.907 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:12.907 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.907 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:12.907 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.907 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:12.907 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.908 [2024-07-22 18:37:24.714607] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:12.908 [2024-07-22 18:37:24.714660] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:12.908 [2024-07-22 18:37:24.714675] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:12.908 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:30:12.908 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:13.842 18:37:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.842 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.101 [2024-07-22 18:37:25.863293] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:14.101 [2024-07-22 18:37:25.863379] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:14.101 [2024-07-22 18:37:25.870904] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:14.101 [2024-07-22 18:37:25.870952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:14.101 [2024-07-22 18:37:25.870975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:14.101 [2024-07-22 18:37:25.870990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:14.101 [2024-07-22 18:37:25.871004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:14.101 [2024-07-22 18:37:25.871019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:14.101 [2024-07-22 18:37:25.871034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:14.101 [2024-07-22 18:37:25.871048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:14.101 [2024-07-22 18:37:25.871062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.101 [2024-07-22 18:37:25.880834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.101 [2024-07-22 18:37:25.890878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:14.101 [2024-07-22 18:37:25.891077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.101 [2024-07-22 18:37:25.891122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.2, port=4420 00:30:14.101 [2024-07-22 18:37:25.891142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:30:14.101 [2024-07-22 18:37:25.891170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:30:14.101 [2024-07-22 18:37:25.891202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:14.101 [2024-07-22 18:37:25.891225] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:14.101 [2024-07-22 
18:37:25.891242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:14.101 [2024-07-22 18:37:25.891287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.101 [2024-07-22 18:37:25.900982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:14.101 [2024-07-22 18:37:25.901116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.101 [2024-07-22 18:37:25.901146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.2, port=4420 00:30:14.101 [2024-07-22 18:37:25.901162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:30:14.101 [2024-07-22 18:37:25.901186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:30:14.101 [2024-07-22 18:37:25.901207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:14.101 [2024-07-22 18:37:25.901220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:14.101 [2024-07-22 18:37:25.901233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:14.101 [2024-07-22 18:37:25.901255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.101 [2024-07-22 18:37:25.911081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:14.101 [2024-07-22 18:37:25.911233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.101 [2024-07-22 18:37:25.911263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.2, port=4420 00:30:14.101 [2024-07-22 18:37:25.911279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:30:14.101 [2024-07-22 18:37:25.911304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:30:14.101 [2024-07-22 18:37:25.911340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:14.101 [2024-07-22 18:37:25.911356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:14.101 [2024-07-22 18:37:25.911387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:14.101 [2024-07-22 18:37:25.911410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
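The waitforcondition calls traced throughout this section (common/autotest_common.sh @912-@918) implement a simple poll-with-timeout around an arbitrary shell condition. A minimal bash sketch, reconstructed only from the xtrace visible in this log; the upstream helper may differ in detail:

waitforcondition() {
    local cond=$1      # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
    local max=10       # poll up to 10 times, one second apart (per @913/@918)
    while (( max-- )); do
        if eval "$cond"; then
            return 0   # condition met, stop polling
        fi
        sleep 1
    done
    return 1           # condition never became true within the window
}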
00:30:14.101 [2024-07-22 18:37:25.921208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:14.101 [2024-07-22 18:37:25.921403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.101 [2024-07-22 18:37:25.921433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.2, port=4420 00:30:14.101 [2024-07-22 18:37:25.921449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:30:14.101 [2024-07-22 18:37:25.921488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:30:14.101 [2024-07-22 18:37:25.921512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:14.101 [2024-07-22 18:37:25.921525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:14.101 [2024-07-22 18:37:25.921538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:14.101 [2024-07-22 18:37:25.921560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:14.101 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:14.101 [2024-07-22 18:37:25.932067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:14.101 [2024-07-22 18:37:25.932189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.101 [2024-07-22 18:37:25.932219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.2, port=4420 00:30:14.101 [2024-07-22 18:37:25.932250] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:30:14.101 [2024-07-22 18:37:25.932274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:30:14.102 [2024-07-22 18:37:25.932295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:14.102 [2024-07-22 18:37:25.932308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:14.102 [2024-07-22 18:37:25.932321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:14.102 [2024-07-22 18:37:25.932359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.102 [2024-07-22 18:37:25.942153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:14.102 [2024-07-22 18:37:25.942319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.102 [2024-07-22 18:37:25.942355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.2, port=4420 00:30:14.102 [2024-07-22 18:37:25.942373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:30:14.102 [2024-07-22 18:37:25.942397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:30:14.102 [2024-07-22 18:37:25.942418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:14.102 [2024-07-22 18:37:25.942431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:14.102 [2024-07-22 18:37:25.942444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:14.102 [2024-07-22 18:37:25.942466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
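The get_subsystem_names, get_bdev_list and get_subsystem_paths conditions evaluated in this trace are thin wrappers around rpc_cmd plus a jq pipeline, as the per-command xtrace at host/discovery.sh @55, @59 and @63 shows. A hedged reconstruction from that trace (the exact upstream definitions may differ slightly):

# Names of NVMe controllers attached on the host-side app at /tmp/host.sock
get_subsystem_names() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name' | sort | xargs
}

# Block devices created by the attached controllers (e.g. "nvme0n1 nvme0n2")
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
        | jq -r '.[].name' | sort | xargs
}

# Listener ports (trsvcid) of every path attached for the given controller
get_subsystem_paths() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}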
00:30:14.102 [2024-07-22 18:37:25.949135] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:14.102 [2024-07-22 18:37:25.949207] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:14.102 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.102 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:14.102 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:14.102 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:14.102 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:14.102 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:14.102 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:14.102 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:14.102 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:14.102 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:14.102 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:14.102 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.102 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.102 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:14.102 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:14.102 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.102 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:30:14.102 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:14.102 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:30:14.102 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:14.102 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:14.102 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:14.102 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:14.102 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:14.102 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:30:14.102 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:14.102 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:14.102 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:14.102 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.102 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.102 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.102 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:14.102 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:14.102 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:14.102 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:14.102 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:14.102 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.102 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.360 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:15.732 [2024-07-22 18:37:27.333125] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:15.732 [2024-07-22 18:37:27.333214] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:15.732 [2024-07-22 18:37:27.333259] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:15.732 [2024-07-22 18:37:27.419372] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:15.732 [2024-07-22 18:37:27.489498] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:15.732 [2024-07-22 18:37:27.489601] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:15.732 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.732 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:15.732 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:15.732 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:15.732 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:15.732 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:15.732 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:15.733 2024/07/22 18:37:27 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:30:15.733 request: 00:30:15.733 { 00:30:15.733 "method": "bdev_nvme_start_discovery", 00:30:15.733 "params": { 00:30:15.733 "name": "nvme", 00:30:15.733 "trtype": "tcp", 00:30:15.733 "traddr": "10.0.0.2", 00:30:15.733 "adrfam": "ipv4", 00:30:15.733 "trsvcid": "8009", 00:30:15.733 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:15.733 "wait_for_attach": true 00:30:15.733 } 00:30:15.733 } 00:30:15.733 Got JSON-RPC error response 00:30:15.733 GoRPCClient: error on JSON-RPC call 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:15.733 2024/07/22 18:37:27 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:30:15.733 request: 00:30:15.733 { 00:30:15.733 "method": "bdev_nvme_start_discovery", 00:30:15.733 "params": { 00:30:15.733 "name": "nvme_second", 00:30:15.733 "trtype": "tcp", 00:30:15.733 "traddr": "10.0.0.2", 00:30:15.733 "adrfam": "ipv4", 00:30:15.733 "trsvcid": "8009", 00:30:15.733 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:15.733 "wait_for_attach": true 00:30:15.733 } 00:30:15.733 } 00:30:15.733 Got JSON-RPC error response 00:30:15.733 GoRPCClient: error on JSON-RPC call 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r 
'.[].name' 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:15.733 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:15.991 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.991 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:15.991 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:15.991 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:15.991 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:15.991 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:15.991 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:15.991 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:15.991 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:15.991 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:15.991 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.991 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:16.926 [2024-07-22 18:37:28.782496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.926 [2024-07-22 18:37:28.782639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002bc80 with addr=10.0.0.2, port=8010 00:30:16.926 [2024-07-22 18:37:28.782710] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:16.926 [2024-07-22 18:37:28.782728] nvme.c: 
830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:16.926 [2024-07-22 18:37:28.782746] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:17.861 [2024-07-22 18:37:29.782505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.861 [2024-07-22 18:37:29.782647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002bf00 with addr=10.0.0.2, port=8010 00:30:17.861 [2024-07-22 18:37:29.782721] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:17.861 [2024-07-22 18:37:29.782740] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:17.861 [2024-07-22 18:37:29.782773] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:18.795 [2024-07-22 18:37:30.782081] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:18.795 2024/07/22 18:37:30 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:30:18.795 request: 00:30:18.795 { 00:30:18.795 "method": "bdev_nvme_start_discovery", 00:30:18.795 "params": { 00:30:18.795 "name": "nvme_second", 00:30:18.795 "trtype": "tcp", 00:30:18.795 "traddr": "10.0.0.2", 00:30:18.795 "adrfam": "ipv4", 00:30:18.795 "trsvcid": "8010", 00:30:18.795 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:18.795 "wait_for_attach": false, 00:30:18.795 "attach_timeout_ms": 3000 00:30:18.795 } 00:30:18.795 } 00:30:18.795 Got JSON-RPC error response 00:30:18.795 GoRPCClient: error on JSON-RPC call 00:30:18.795 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:18.795 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:18.795 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:18.795 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:18.795 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:18.795 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:30:18.795 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:18.795 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.795 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:18.795 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:18.795 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:18.795 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:18.795 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.054 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:30:19.054 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap 
- SIGINT SIGTERM EXIT 00:30:19.054 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 99870 00:30:19.054 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:30:19.054 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:19.054 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:30:19.054 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:19.054 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:30:19.054 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:19.054 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:19.054 rmmod nvme_tcp 00:30:19.054 rmmod nvme_fabrics 00:30:19.054 rmmod nvme_keyring 00:30:19.054 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:19.054 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:30:19.054 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:30:19.054 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 99820 ']' 00:30:19.054 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 99820 00:30:19.054 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 99820 ']' 00:30:19.054 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 99820 00:30:19.054 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:30:19.054 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:19.054 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99820 00:30:19.054 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:19.054 killing process with pid 99820 00:30:19.054 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:19.054 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99820' 00:30:19.054 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 99820 00:30:19.054 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 99820 00:30:20.460 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:20.460 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:20.461 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:20.461 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:20.461 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:20.461 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.461 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:20.461 18:37:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.461 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:20.461 00:30:20.461 real 0m12.600s 00:30:20.461 user 0m24.586s 00:30:20.461 sys 0m2.073s 00:30:20.461 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:20.461 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.461 ************************************ 00:30:20.461 END TEST nvmf_host_discovery 00:30:20.461 ************************************ 00:30:20.461 18:37:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:30:20.461 18:37:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:20.461 18:37:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:20.461 18:37:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:20.461 18:37:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.461 ************************************ 00:30:20.461 START TEST nvmf_host_multipath_status 00:30:20.461 ************************************ 00:30:20.461 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:20.719 * Looking for test storage... 00:30:20.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:20.720 Cannot find device "nvmf_tgt_br" 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:20.720 Cannot find device "nvmf_tgt_br2" 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:20.720 Cannot find device "nvmf_tgt_br" 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:20.720 Cannot find device "nvmf_tgt_br2" 00:30:20.720 18:37:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:20.720 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:20.720 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:20.720 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:20.978 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:20.978 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:20.978 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:20.978 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:20.978 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:20.978 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:20.978 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:20.978 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:20.978 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:20.978 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:20.978 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:20.978 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:20.978 
18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:20.978 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:20.978 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:20.978 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:20.978 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:20.978 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:20.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:20.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:30:20.978 00:30:20.978 --- 10.0.0.2 ping statistics --- 00:30:20.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.978 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:30:20.978 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:20.978 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:20.978 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:30:20.978 00:30:20.978 --- 10.0.0.3 ping statistics --- 00:30:20.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.978 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:30:20.978 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:20.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
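For reference, the nvmf_veth_init steps traced above reduce to the sketch below; it is an illustrative reconstruction only (interface names and the 10.0.0.x/24 addresses are copied from the trace, error handling and cleanup from common.sh are omitted):

  # Target-side network namespace with two veth pairs bridged to the initiator
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Addresses: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # Bring everything up and bridge the host-side ends together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # Allow NVMe/TCP traffic and verify reachability (the pings traced here)
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1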
00:30:20.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:30:20.979 00:30:20.979 --- 10.0.0.1 ping statistics --- 00:30:20.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.979 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:30:20.979 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:20.979 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:30:20.979 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:20.979 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:20.979 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:20.979 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:20.979 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:20.979 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:20.979 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:20.979 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:20.979 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:20.979 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:20.979 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:20.979 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=100359 00:30:20.979 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:20.979 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 100359 00:30:20.979 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 100359 ']' 00:30:20.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:20.979 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:20.979 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:20.979 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:20.979 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:20.979 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:21.236 [2024-07-22 18:37:33.004241] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
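The target bring-up that the trace walks through next can be condensed into the sketch below; paths, NQN, serial and ports are copied from the traced rpc.py calls, while the polling loop standing in for waitforlisten is an assumption (its internals are not shown in this log):

  # Start the SPDK NVMe-oF target inside the namespace (cores 0x3, shm id 0)
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Assumption: wait for the default RPC socket /var/tmp/spdk.sock to answer
  until $rpc_py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

  # TCP transport, a 64 MB malloc bdev with 512-byte blocks, and one subsystem
  # exposed through two TCP listeners (ports 4420 and 4421 on 10.0.0.2)
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421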
00:30:21.236 [2024-07-22 18:37:33.004561] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:21.237 [2024-07-22 18:37:33.180784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:21.494 [2024-07-22 18:37:33.478314] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:21.494 [2024-07-22 18:37:33.478447] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:21.494 [2024-07-22 18:37:33.478466] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:21.494 [2024-07-22 18:37:33.478482] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:21.494 [2024-07-22 18:37:33.478494] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:21.494 [2024-07-22 18:37:33.478668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.494 [2024-07-22 18:37:33.478693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.059 18:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:22.059 18:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:30:22.059 18:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:22.059 18:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:22.059 18:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:22.059 18:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:22.059 18:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=100359 00:30:22.059 18:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:22.318 [2024-07-22 18:37:34.289755] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:22.318 18:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:22.909 Malloc0 00:30:22.910 18:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:30:22.910 18:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:23.167 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:23.425 [2024-07-22 18:37:35.341389] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:23.425 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:23.683 [2024-07-22 18:37:35.629569] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:23.683 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=100463 00:30:23.683 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:23.683 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:23.683 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 100463 /var/tmp/bdevperf.sock 00:30:23.684 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 100463 ']' 00:30:23.684 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:23.684 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:23.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:23.684 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:23.684 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:23.684 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:25.057 18:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:25.057 18:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:30:25.057 18:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:25.057 18:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:30:25.626 Nvme0n1 00:30:25.626 18:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:25.884 Nvme0n1 00:30:25.884 18:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:30:25.884 18:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:30:27.785 18:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:30:27.785 18:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:28.350 18:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:28.607 18:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:30:29.540 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:30:29.540 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:29.540 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:29.540 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:29.798 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:29.798 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:29.798 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:29.798 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.055 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:30.055 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:30.055 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.055 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:30.312 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:30.312 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:30.312 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:30.312 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.570 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:30.570 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:30.570 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.570 18:37:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:30.828 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:30.828 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:30.828 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.828 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:31.086 18:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.086 18:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:30:31.086 18:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:31.345 18:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:31.603 18:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:30:32.976 18:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:30:32.976 18:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:32.976 18:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:32.976 18:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:32.976 18:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:32.976 18:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:32.976 18:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:32.976 18:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:33.233 18:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:33.233 18:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:33.233 18:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:33.233 18:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.800 18:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:33.800 18:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:33.800 18:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.800 18:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:34.059 18:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.059 18:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:34.059 18:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.059 18:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:34.329 18:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.329 18:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:34.329 18:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.329 18:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:34.587 18:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.587 18:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:30:34.587 18:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:34.845 18:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:35.103 18:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:30:36.037 18:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:30:36.037 18:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:36.037 18:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.037 18:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:36.294 18:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:36.294 18:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:36.294 18:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.294 18:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:36.552 18:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:36.552 18:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:36.552 18:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.552 18:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:36.810 18:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:36.810 18:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:36.810 18:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.810 18:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:37.073 18:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.073 18:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:37.073 18:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:37.073 18:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.331 18:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.331 18:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:37.331 18:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.331 18:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:37.589 18:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.589 18:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # 
set_ANA_state non_optimized inaccessible 00:30:37.589 18:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:37.847 18:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:38.105 18:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:30:39.039 18:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:30:39.039 18:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:39.039 18:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.039 18:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:39.297 18:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:39.297 18:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:39.297 18:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.297 18:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:39.555 18:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:39.555 18:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:39.555 18:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.555 18:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:40.121 18:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.121 18:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:40.121 18:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.121 18:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:40.379 18:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.379 18:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:40.379 
18:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.379 18:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:40.637 18:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.637 18:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:40.637 18:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.637 18:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:40.895 18:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:40.895 18:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:30:40.895 18:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:41.153 18:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:41.421 18:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:30:42.357 18:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:30:42.357 18:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:42.357 18:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.357 18:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:42.924 18:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:42.924 18:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:42.924 18:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.924 18:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:43.186 18:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:43.186 18:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:43.186 18:37:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.186 18:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:43.445 18:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:43.445 18:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:43.445 18:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:43.445 18:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.704 18:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:43.704 18:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:43.704 18:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.704 18:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:43.963 18:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:43.963 18:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:43.963 18:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:43.963 18:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:44.529 18:37:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:44.529 18:37:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:30:44.529 18:37:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:44.529 18:37:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:45.098 18:37:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:30:46.092 18:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:30:46.092 18:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:46.092 18:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:46.092 18:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:46.092 18:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:46.092 18:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:46.092 18:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:46.092 18:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:46.661 18:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:46.661 18:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:46.661 18:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:46.661 18:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:46.661 18:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:46.661 18:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:46.661 18:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:46.661 18:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:46.920 18:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:46.920 18:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:46.920 18:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:46.920 18:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:47.178 18:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:47.178 18:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:47.179 18:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:47.179 18:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 
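Each check_status round in the trace boils down to flipping a listener's ANA state on the target and then reading the host-side path view through the bdevperf RPC socket. A minimal sketch of that round trip follows; every command, flag and jq filter is taken from the traced calls, and the port_status/set_ANA_state helpers in multipath_status.sh are simply thin wrappers around these:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bdevperf_rpc_sock=/var/tmp/bdevperf.sock
  NQN=nqn.2016-06.io.spdk:cnode1

  # Host side: one controller name (Nvme0), two paths to the same subsystem;
  # the second attach adds port 4421 as an extra path via -x multipath
  $rpc_py -s $bdevperf_rpc_sock bdev_nvme_set_options -r -1
  $rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN -l -1 -o 10
  $rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN -x multipath -l -1 -o 10

  # Target side: change what the host should see on one port
  # (optimized / non_optimized / inaccessible, as exercised above)
  $rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n inaccessible

  # Host side: query the I/O paths and extract one port's flags
  # (.current, .connected, .accessible are the three fields the test asserts on)
  $rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'

  # The later rounds also switch the multipath policy before re-checking
  $rpc_py -s $bdevperf_rpc_sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active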
00:30:47.746 18:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:47.746 18:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:30:47.746 18:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:30:47.746 18:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:48.005 18:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:48.572 18:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:30:49.505 18:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:30:49.505 18:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:49.505 18:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:49.505 18:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:49.764 18:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:49.764 18:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:49.764 18:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:49.764 18:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.021 18:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:50.022 18:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:50.022 18:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.022 18:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:50.279 18:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:50.279 18:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:50.279 18:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.279 18:38:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:50.538 18:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:50.538 18:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:50.538 18:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.538 18:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:50.797 18:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:50.797 18:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:50.797 18:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.797 18:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:51.142 18:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:51.142 18:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:30:51.142 18:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:51.400 18:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:51.659 18:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:30:52.617 18:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:30:52.617 18:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:52.617 18:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:52.617 18:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:52.875 18:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:52.875 18:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:52.875 18:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:52.875 18:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:53.133 18:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.133 18:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:53.133 18:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.133 18:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:53.392 18:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.392 18:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:53.392 18:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.392 18:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:53.650 18:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.650 18:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:53.650 18:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.650 18:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:53.908 18:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.908 18:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:53.908 18:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.908 18:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:54.166 18:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:54.166 18:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:30:54.166 18:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:54.425 18:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:54.683 18:38:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:30:55.624 18:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:30:55.624 18:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:55.624 18:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:55.624 18:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:56.190 18:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:56.190 18:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:56.190 18:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.190 18:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:56.190 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:56.190 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:56.190 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.190 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:56.755 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:56.755 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:56.755 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.755 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:57.014 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:57.014 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:57.014 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:57.014 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:57.272 18:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:57.272 18:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:57.272 18:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:57.272 18:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:57.530 18:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:57.530 18:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:30:57.530 18:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:57.788 18:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:58.046 18:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:30:58.980 18:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:30:58.980 18:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:58.980 18:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:58.980 18:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:59.547 18:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:59.547 18:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:59.547 18:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:59.547 18:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.805 18:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:59.805 18:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:59.805 18:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.805 18:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:00.064 18:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:00.064 18:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:31:00.064 18:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:00.064 18:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:00.322 18:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:00.322 18:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:00.322 18:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:00.322 18:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:00.581 18:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:00.581 18:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:00.581 18:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:00.581 18:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:00.839 18:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:00.839 18:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 100463 00:31:00.839 18:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 100463 ']' 00:31:00.839 18:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 100463 00:31:00.839 18:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:31:00.839 18:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:00.839 18:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100463 00:31:00.839 killing process with pid 100463 00:31:00.839 18:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:31:00.839 18:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:31:00.839 18:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100463' 00:31:00.839 18:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 100463 00:31:00.839 18:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 100463 00:31:01.780 Connection closed with partial response: 00:31:01.780 00:31:01.780 00:31:02.062 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 100463 00:31:02.062 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 
-- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:31:02.062 [2024-07-22 18:37:35.781064] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:31:02.062 [2024-07-22 18:37:35.781427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100463 ] 00:31:02.062 [2024-07-22 18:37:35.949968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.062 [2024-07-22 18:37:36.229777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:02.062 Running I/O for 90 seconds... 00:31:02.062 [2024-07-22 18:37:53.076947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.062 [2024-07-22 18:37:53.077074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.062 [2024-07-22 18:37:53.077190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:28896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.062 [2024-07-22 18:37:53.077228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:02.062 [2024-07-22 18:37:53.077270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:28904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.062 [2024-07-22 18:37:53.077297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:02.062 [2024-07-22 18:37:53.077335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:28912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.062 [2024-07-22 18:37:53.077361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:02.062 [2024-07-22 18:37:53.077403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.062 [2024-07-22 18:37:53.077446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:02.062 [2024-07-22 18:37:53.077487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:28928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.062 [2024-07-22 18:37:53.077514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:02.062 [2024-07-22 18:37:53.077550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:28936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.062 [2024-07-22 18:37:53.077576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:02.062 [2024-07-22 18:37:53.077612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.062 [2024-07-22 18:37:53.077637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:02.062 [2024-07-22 18:37:53.077673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:28952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.062 [2024-07-22 18:37:53.077698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:02.062 [2024-07-22 18:37:53.077735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.062 [2024-07-22 18:37:53.077761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:02.062 [2024-07-22 18:37:53.077798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:28968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.062 [2024-07-22 18:37:53.077871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:02.062 [2024-07-22 18:37:53.077915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:28976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.062 [2024-07-22 18:37:53.077943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:02.062 [2024-07-22 18:37:53.077980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.062 [2024-07-22 18:37:53.078006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:02.062 [2024-07-22 18:37:53.078056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:28992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.062 [2024-07-22 18:37:53.078085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:02.062 [2024-07-22 18:37:53.078124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:29000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.062 [2024-07-22 18:37:53.078149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:02.062 [2024-07-22 18:37:53.078186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:29008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.062 [2024-07-22 18:37:53.078211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:02.062 [2024-07-22 18:37:53.078262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.062 [2024-07-22 18:37:53.078287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:02.062 [2024-07-22 18:37:53.078324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:29024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.062 [2024-07-22 
18:37:53.078349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:02.062 [2024-07-22 18:37:53.078385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:29032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.062 [2024-07-22 18:37:53.078410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:02.062 [2024-07-22 18:37:53.078446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:29040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.062 [2024-07-22 18:37:53.078471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.078508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.063 [2024-07-22 18:37:53.078533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.078569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:29056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.063 [2024-07-22 18:37:53.078594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.078631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:29064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.063 [2024-07-22 18:37:53.078655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.078706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:29072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.063 [2024-07-22 18:37:53.078734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.078772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:29080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.063 [2024-07-22 18:37:53.078797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.078849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.063 [2024-07-22 18:37:53.078878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.078916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.063 [2024-07-22 18:37:53.078942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.078981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:29104 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.063 [2024-07-22 18:37:53.079007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.079044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:29112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.063 [2024-07-22 18:37:53.079072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.079130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:29120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.063 [2024-07-22 18:37:53.079159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.079196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:29128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.063 [2024-07-22 18:37:53.079221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.079258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:29136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.063 [2024-07-22 18:37:53.079282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.079321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:29144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.063 [2024-07-22 18:37:53.079347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.079384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:29152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.063 [2024-07-22 18:37:53.079409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.079471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.063 [2024-07-22 18:37:53.079497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.079560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:29168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.063 [2024-07-22 18:37:53.079589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.079626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:29176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.063 [2024-07-22 18:37:53.079652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.079690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.063 [2024-07-22 18:37:53.079715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.079753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:29192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.063 [2024-07-22 18:37:53.079778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.082389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.063 [2024-07-22 18:37:53.082432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.083015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:29208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.063 [2024-07-22 18:37:53.083061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.083121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:29216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.063 [2024-07-22 18:37:53.083150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.083200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:29224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.063 [2024-07-22 18:37:53.083228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.083276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:29232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.063 [2024-07-22 18:37:53.083301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.083347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:29240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.063 [2024-07-22 18:37:53.083373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.083419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.063 [2024-07-22 18:37:53.083445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:02.063 [2024-07-22 18:37:53.083491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:29256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.063 [2024-07-22 18:37:53.083517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 
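
The block above is bdevperf's per-I/O trace from the 90-second run: nvme_io_qpair_print_command echoes each READ/WRITE submission and spdk_nvme_print_completion echoes its completion, here carrying ASYMMETRIC ACCESS INACCESSIBLE (03/02), the status returned when an I/O lands on a path whose ANA group is inaccessible. A minimal post-processing sketch for tallying completions by status, assuming the dump has been saved to a local try.txt (hypothetical path, not part of this run):

#!/usr/bin/env bash
# Sketch: tally NVMe completion statuses in a bdevperf trace dump.
# Assumes a local copy of the try.txt dump shown above (path is hypothetical).
set -euo pipefail

dump="${1:-./try.txt}"

# Each spdk_nvme_print_completion line carries the status string followed by
# "(sct/sc)", e.g. "ASYMMETRIC ACCESS INACCESSIBLE (03/02)".
grep -oE '\*NOTICE\*: [A-Z ]+ \([0-9a-f]{2}/[0-9a-f]{2}\)' "$dump" |
    sed 's/^\*NOTICE\*: //' |
    sort | uniq -c | sort -rn

Run against a saved dump, the counts make it easy to see which status codes dominated during each ANA transition, without scanning the raw trace.
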
00:31:02.064 [2024-07-22 18:37:53.083563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:29264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.083606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.083655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.083682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.083729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:29280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.083754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.083801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:29288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.083827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.083895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:29296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.083953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.084003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:29304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.084029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.084077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:29312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.084104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.084150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.084176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.084222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.084249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.084301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.084333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.084381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:29344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.084407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.084454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:29352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.084480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.084526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.084564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.084613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.084641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.084688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.084714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.084762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:29384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.084788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.084949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.084986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.085057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.085091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.085142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.085168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.085217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:29416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.085243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.085293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:29424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.085319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.085366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:29432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.085402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.085466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.085494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.085543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.085568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.085617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:29456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.085643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.085706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:29464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.085734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.085783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.064 [2024-07-22 18:37:53.085809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:02.064 [2024-07-22 18:37:53.085875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:29480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.065 [2024-07-22 18:37:53.085905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:37:53.085956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.065 [2024-07-22 18:37:53.085982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:38:09.877042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:40176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
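
The trace entries stamped 18:38:09 line up with the last set_ANA_state step recorded earlier in this run (4420 non_optimized, 4421 inaccessible). That step boils down to two rpc.py calls plus a one-second settle before the path status is re-read; a hedged sketch, where set_ana is a hypothetical wrapper and the RPC invocations mirror the ones traced in this log:

#!/usr/bin/env bash
# Sketch of the ANA-state transition step traced earlier: set the ANA state of
# both listeners on cnode1, then give the initiator a moment to observe the
# change before the path table is re-checked. set_ana is a hypothetical name;
# the rpc.py calls mirror the ones recorded above.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

set_ana() {
    local state_4420=$1 state_4421=$2
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$state_4420"
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$state_4421"
    sleep 1   # the test sleeps 1s before re-reading bdev_nvme_get_io_paths
}

# e.g. the final case exercised in this excerpt:
set_ana non_optimized inaccessible
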
00:31:02.065 [2024-07-22 18:38:09.877151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:38:09.877211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:40192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.065 [2024-07-22 18:38:09.877239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:38:09.877280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:40208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.065 [2024-07-22 18:38:09.877314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:38:09.877350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:40224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.065 [2024-07-22 18:38:09.877373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:38:09.877405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.065 [2024-07-22 18:38:09.877427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:38:09.877459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.065 [2024-07-22 18:38:09.877511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:38:09.877545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:40272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.065 [2024-07-22 18:38:09.877567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:38:09.877599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:40288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.065 [2024-07-22 18:38:09.877621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:38:09.877680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:39736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.065 [2024-07-22 18:38:09.877710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:38:09.877741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.065 [2024-07-22 18:38:09.877762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:38:09.877794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 
lba:39800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.065 [2024-07-22 18:38:09.877815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:38:09.877864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.065 [2024-07-22 18:38:09.877888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:38:09.877920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:40312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.065 [2024-07-22 18:38:09.877941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:38:09.877973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:40328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.065 [2024-07-22 18:38:09.877993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:38:09.878036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.065 [2024-07-22 18:38:09.878062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:38:09.878095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:40360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.065 [2024-07-22 18:38:09.878116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:38:09.878149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:39848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.065 [2024-07-22 18:38:09.878170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:38:09.878216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:39880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.065 [2024-07-22 18:38:09.878236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:38:09.878506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.065 [2024-07-22 18:38:09.878541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:38:09.878577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.065 [2024-07-22 18:38:09.878600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:38:09.878633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:39792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.065 [2024-07-22 18:38:09.878668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:38:09.878704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:39824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.065 [2024-07-22 18:38:09.878727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:38:09.878760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.065 [2024-07-22 18:38:09.878782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:38:09.878815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.065 [2024-07-22 18:38:09.878852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:38:09.878889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:40376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.065 [2024-07-22 18:38:09.878911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:02.065 [2024-07-22 18:38:09.878944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:40392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.065 [2024-07-22 18:38:09.878966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:02.066 [2024-07-22 18:38:09.878998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:40408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.066 [2024-07-22 18:38:09.879019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:02.066 [2024-07-22 18:38:09.879050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:40424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.066 [2024-07-22 18:38:09.879071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:02.066 [2024-07-22 18:38:09.879103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:40440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.066 [2024-07-22 18:38:09.879124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:02.066 [2024-07-22 18:38:09.879156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.066 [2024-07-22 18:38:09.879178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 
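
Between ANA flips, the harness re-reads the initiator-side path table over bdevperf's RPC socket and asserts the current/connected/accessible flag per port, as in the port_status traces earlier in this log. A sketch of that check pattern, with expect_path_field as a hypothetical helper and the rpc.py and jq invocations copied from the trace:

#!/usr/bin/env bash
# Sketch of the path-state check traced above: query bdevperf's io paths over
# its RPC socket and compare one boolean field for the path on a given port.
# expect_path_field is a hypothetical wrapper name; with set -e the script
# exits non-zero on the first mismatch, which is the assertion behavior.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

expect_path_field() {
    local port=$1 field=$2 expected=$3
    local actual
    actual=$("$rpc" -s "$sock" bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
}

# e.g. after the non_optimized/inaccessible flip: 4421 stays connected but is
# no longer current or accessible, while 4420 serves the I/O.
expect_path_field 4420 current true
expect_path_field 4421 accessible false
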
00:31:02.066 - 00:31:02.072 [2024-07-22 18:38:09.879210 - 2024-07-22 18:38:09.903646] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: repeated *NOTICE* pairs for READ and WRITE commands on sqid:1 nsid:1 (cid 2-125, lba 39760-41224, len:8, SGL TRANSPORT DATA BLOCK / SGL DATA BLOCK OFFSET), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
00:31:02.072 [2024-07-22 18:38:09.903667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:02.072 [2024-07-22 18:38:09.903697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:40208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.072 [2024-07-22 18:38:09.903718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:02.072 [2024-07-22 18:38:09.903750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:40504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.072 [2024-07-22 18:38:09.903771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:02.072 [2024-07-22 18:38:09.904465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:40688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.072 [2024-07-22 18:38:09.904510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:02.072 [2024-07-22 18:38:09.904553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.072 [2024-07-22 18:38:09.904591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:02.072 [2024-07-22 18:38:09.904626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.072 [2024-07-22 18:38:09.904648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:02.072 [2024-07-22 18:38:09.904678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.072 [2024-07-22 18:38:09.904699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:02.072 [2024-07-22 18:38:09.904729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.072 [2024-07-22 18:38:09.904750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:02.072 [2024-07-22 18:38:09.904780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.072 [2024-07-22 18:38:09.904801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.072 [2024-07-22 18:38:09.904846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:40328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.072 [2024-07-22 18:38:09.904870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:02.072 [2024-07-22 18:38:09.904902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:108 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.072 [2024-07-22 18:38:09.904923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:02.072 [2024-07-22 18:38:09.904964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.072 [2024-07-22 18:38:09.904984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:02.072 [2024-07-22 18:38:09.905015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.072 [2024-07-22 18:38:09.905035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:02.072 [2024-07-22 18:38:09.905065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.072 [2024-07-22 18:38:09.905085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:02.072 [2024-07-22 18:38:09.905116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.072 [2024-07-22 18:38:09.905136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:02.072 [2024-07-22 18:38:09.905175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:40464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.072 [2024-07-22 18:38:09.905195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:02.072 [2024-07-22 18:38:09.905224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:40376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.072 [2024-07-22 18:38:09.905255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:02.072 [2024-07-22 18:38:09.905288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.072 [2024-07-22 18:38:09.905310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:02.072 [2024-07-22 18:38:09.905341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.072 [2024-07-22 18:38:09.905361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:02.072 [2024-07-22 18:38:09.905391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.072 [2024-07-22 18:38:09.905411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:02.072 [2024-07-22 18:38:09.905442] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.072 [2024-07-22 18:38:09.905462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:02.072 [2024-07-22 18:38:09.905492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:40536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.072 [2024-07-22 18:38:09.905513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:02.072 [2024-07-22 18:38:09.905543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.072 [2024-07-22 18:38:09.905564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:02.072 [2024-07-22 18:38:09.905594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:41064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.072 [2024-07-22 18:38:09.905614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.905646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:40208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.905667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.907075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:40568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.907142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.907186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.907211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.907243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.907265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.907295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.907316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.907362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:41160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.907386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0015 
p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.907416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.907437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.907467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.907487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.907518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.907539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.907568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.907589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.907619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.907640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.907671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.073 [2024-07-22 18:38:09.907691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.907722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:40376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.907742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.907773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.907794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.907823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.073 [2024-07-22 18:38:09.907869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.907903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:40672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.907924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.907955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:40208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.907976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.908017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.908040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.908071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.908093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.908396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:40936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.908428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.908466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:41216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.908488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.908521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:40392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.908543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.908887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.908920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.908957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.908980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.909012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.909033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.909066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.909086] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.909117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.909138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.909169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:40376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.909189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.909220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.073 [2024-07-22 18:38:09.909240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.909287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:40208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.909311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.909342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.909363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.909567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:41216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.909599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.914453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.914505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.914551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.914575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.914608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:40376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.914630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.914662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:40208 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:02.073 [2024-07-22 18:38:09.914683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.916045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:41240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.073 [2024-07-22 18:38:09.916093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:02.073 [2024-07-22 18:38:09.916137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.916161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.916193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.916214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.916245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.916265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.916295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.916317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.916347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.916383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.916417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.916440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.916472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.916493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.916528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.916548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.916588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:59 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.916608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.916638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.916659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.916689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.916709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.916739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.916760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.916790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.916811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.916857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.916881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.916913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.916934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.916964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.916985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.917015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.917046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.917080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.917112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.917143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.917164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.917195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.917216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.917246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.917267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.917297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.917318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.917348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.917369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.917400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.917420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.917450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.917471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.917501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.917522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.917553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.917574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.917605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.917626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 
00:31:02.074 [2024-07-22 18:38:09.917656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.917694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.917737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.917760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.917790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.917811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.917855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.917880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.917912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.917934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.918539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:41784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.074 [2024-07-22 18:38:09.918579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.918620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:41200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.074 [2024-07-22 18:38:09.918645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.918676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:40720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.074 [2024-07-22 18:38:09.918696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.918728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:41224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.074 [2024-07-22 18:38:09.918749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.918780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.074 [2024-07-22 18:38:09.918800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:02.074 [2024-07-22 18:38:09.918845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:41216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.075 [2024-07-22 18:38:09.918871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.918903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.075 [2024-07-22 18:38:09.918925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.918956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:40208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.075 [2024-07-22 18:38:09.918976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.919021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.075 [2024-07-22 18:38:09.919044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.919077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.075 [2024-07-22 18:38:09.919098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.923578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.075 [2024-07-22 18:38:09.923629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.923675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.075 [2024-07-22 18:38:09.923699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.923732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.075 [2024-07-22 18:38:09.923753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.923784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.075 [2024-07-22 18:38:09.923805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.923852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.075 [2024-07-22 18:38:09.923877] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.923908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.075 [2024-07-22 18:38:09.923936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.923967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.075 [2024-07-22 18:38:09.923988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.924018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.075 [2024-07-22 18:38:09.924039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.924069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.075 [2024-07-22 18:38:09.924090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.924119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.075 [2024-07-22 18:38:09.924140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.924170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.075 [2024-07-22 18:38:09.924204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.924237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.075 [2024-07-22 18:38:09.924259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.924290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.075 [2024-07-22 18:38:09.924311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.924341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.075 [2024-07-22 18:38:09.924361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.924391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:02.075 [2024-07-22 18:38:09.924412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.924444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.075 [2024-07-22 18:38:09.924464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.924495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.075 [2024-07-22 18:38:09.924515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.924556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.075 [2024-07-22 18:38:09.924576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.924607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.075 [2024-07-22 18:38:09.924628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.924658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.075 [2024-07-22 18:38:09.924678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.924708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:41344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.075 [2024-07-22 18:38:09.924728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.924758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:41376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.075 [2024-07-22 18:38:09.924779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.924809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.075 [2024-07-22 18:38:09.924852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.924889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:41440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.075 [2024-07-22 18:38:09.924911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.924942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 
nsid:1 lba:41472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.075 [2024-07-22 18:38:09.924962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.924993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:41504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.075 [2024-07-22 18:38:09.925014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.925043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:41536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.075 [2024-07-22 18:38:09.925064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.925095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:41568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.075 [2024-07-22 18:38:09.925115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.925145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:41600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.075 [2024-07-22 18:38:09.925166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.925196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.075 [2024-07-22 18:38:09.925217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.925247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:41664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.075 [2024-07-22 18:38:09.925268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.925310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:41696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.075 [2024-07-22 18:38:09.925331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.925362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:41728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.075 [2024-07-22 18:38:09.925382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.075 [2024-07-22 18:38:09.925413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:41760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.075 [2024-07-22 18:38:09.925434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.925464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.076 [2024-07-22 18:38:09.925484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.925528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:40720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.076 [2024-07-22 18:38:09.925552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.925582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.076 [2024-07-22 18:38:09.925603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.925633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.076 [2024-07-22 18:38:09.925653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.925685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.076 [2024-07-22 18:38:09.925705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.928270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.076 [2024-07-22 18:38:09.928319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.928364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.076 [2024-07-22 18:38:09.928389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.928420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.076 [2024-07-22 18:38:09.928442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.928472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:41840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.076 [2024-07-22 18:38:09.928493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.928525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:41856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.076 [2024-07-22 18:38:09.928546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 
00:31:02.076 [2024-07-22 18:38:09.928576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.076 [2024-07-22 18:38:09.928596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.928626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.076 [2024-07-22 18:38:09.928647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.928678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.076 [2024-07-22 18:38:09.928698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.928744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.076 [2024-07-22 18:38:09.928766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.928798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.076 [2024-07-22 18:38:09.928818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.928864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.076 [2024-07-22 18:38:09.928904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.928936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.076 [2024-07-22 18:38:09.928956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.928986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.076 [2024-07-22 18:38:09.929006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.929037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:42000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.076 [2024-07-22 18:38:09.929057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.929087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:42016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.076 [2024-07-22 18:38:09.929108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.929138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:42032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.076 [2024-07-22 18:38:09.929159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.929188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:42048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.076 [2024-07-22 18:38:09.929209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.929239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:42064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.076 [2024-07-22 18:38:09.929259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.929289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:42080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.076 [2024-07-22 18:38:09.929310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.929340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:42096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.076 [2024-07-22 18:38:09.929367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.929397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:42112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.076 [2024-07-22 18:38:09.929428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.929460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:42128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.076 [2024-07-22 18:38:09.929481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.929511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:42144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.076 [2024-07-22 18:38:09.929532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.929562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:42160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.076 [2024-07-22 18:38:09.929582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.929612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:41272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.076 [2024-07-22 18:38:09.929633] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.929663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:41336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.076 [2024-07-22 18:38:09.929684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.929714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:41400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.076 [2024-07-22 18:38:09.929734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.929765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:41464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.076 [2024-07-22 18:38:09.929785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.929814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:41528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.076 [2024-07-22 18:38:09.929848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.929882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.076 [2024-07-22 18:38:09.929903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.929933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:41656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.076 [2024-07-22 18:38:09.929953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.929984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:41720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.076 [2024-07-22 18:38:09.930004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:02.076 [2024-07-22 18:38:09.930047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:42168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.077 [2024-07-22 18:38:09.930081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.930115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.077 [2024-07-22 18:38:09.930136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.930167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:02.077 [2024-07-22 18:38:09.930187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.930218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.077 [2024-07-22 18:38:09.930238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.930268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.077 [2024-07-22 18:38:09.930289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.930318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.077 [2024-07-22 18:38:09.930339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.930369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.077 [2024-07-22 18:38:09.930390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.931064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.077 [2024-07-22 18:38:09.931117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.931161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.077 [2024-07-22 18:38:09.931185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.931217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.077 [2024-07-22 18:38:09.931238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.931269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.077 [2024-07-22 18:38:09.931290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.931321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:41376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.077 [2024-07-22 18:38:09.931342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.931372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 
lba:41440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.077 [2024-07-22 18:38:09.931393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.931437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:41504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.077 [2024-07-22 18:38:09.931461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.931491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:41568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.077 [2024-07-22 18:38:09.931512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.931542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:41632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.077 [2024-07-22 18:38:09.931562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.931592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:41696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.077 [2024-07-22 18:38:09.931613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.931643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.077 [2024-07-22 18:38:09.931663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.931696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:40720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.077 [2024-07-22 18:38:09.931716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.931747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.077 [2024-07-22 18:38:09.931767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.931797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:42184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.077 [2024-07-22 18:38:09.931818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.931865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:42200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.077 [2024-07-22 18:38:09.931889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.931919] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:42216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.077 [2024-07-22 18:38:09.931939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.931969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:42232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.077 [2024-07-22 18:38:09.931988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.932018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:42248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.077 [2024-07-22 18:38:09.932039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.932081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:42264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.077 [2024-07-22 18:38:09.932104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.932134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:42280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.077 [2024-07-22 18:38:09.932154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.932184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.077 [2024-07-22 18:38:09.932204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.932234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:42312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.077 [2024-07-22 18:38:09.932253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.932283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:42328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.077 [2024-07-22 18:38:09.932307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.932337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:42344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.077 [2024-07-22 18:38:09.932357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.932387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:42360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.077 [2024-07-22 18:38:09.932408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 
00:31:02.077 [2024-07-22 18:38:09.934060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:41808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.077 [2024-07-22 18:38:09.934106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:02.077 [2024-07-22 18:38:09.934150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.078 [2024-07-22 18:38:09.934174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.934205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:41872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.078 [2024-07-22 18:38:09.934227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.934257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.078 [2024-07-22 18:38:09.934279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.934309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.078 [2024-07-22 18:38:09.934330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.934360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.078 [2024-07-22 18:38:09.934401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.934434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:42000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.078 [2024-07-22 18:38:09.934457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.934487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:42032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.078 [2024-07-22 18:38:09.934508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.934539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.078 [2024-07-22 18:38:09.934559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.934589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:42096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.078 [2024-07-22 18:38:09.934610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:27 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.934640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:42128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.078 [2024-07-22 18:38:09.934677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.934709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:42160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.078 [2024-07-22 18:38:09.934731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.934761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:41336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.078 [2024-07-22 18:38:09.934782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.934813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:41464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.078 [2024-07-22 18:38:09.934849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.934885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:41592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.078 [2024-07-22 18:38:09.934907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.934937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:41720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.078 [2024-07-22 18:38:09.934958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.934988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.078 [2024-07-22 18:38:09.935009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.935039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.078 [2024-07-22 18:38:09.935072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.935106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.078 [2024-07-22 18:38:09.935128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.937563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:41816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.078 [2024-07-22 18:38:09.937610] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.937664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.078 [2024-07-22 18:38:09.937692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.937724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:41880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.078 [2024-07-22 18:38:09.937746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.937775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:41912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.078 [2024-07-22 18:38:09.937796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.937825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:41944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.078 [2024-07-22 18:38:09.937866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.937899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:41976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.078 [2024-07-22 18:38:09.937921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.937951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:42008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.078 [2024-07-22 18:38:09.937972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.938002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:42040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.078 [2024-07-22 18:38:09.938035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.938069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:42072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.078 [2024-07-22 18:38:09.938091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.938130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.078 [2024-07-22 18:38:09.938151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.938182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:42136 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:02.078 [2024-07-22 18:38:09.938202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.938248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.078 [2024-07-22 18:38:09.938271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.938302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.078 [2024-07-22 18:38:09.938323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.938353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:41376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.078 [2024-07-22 18:38:09.938374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.938403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.078 [2024-07-22 18:38:09.938424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.938454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:41632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.078 [2024-07-22 18:38:09.938475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.938504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:41760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.078 [2024-07-22 18:38:09.938533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.938563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.078 [2024-07-22 18:38:09.938584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.938615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:42200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.078 [2024-07-22 18:38:09.938636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.938666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:42232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.078 [2024-07-22 18:38:09.938686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.078 [2024-07-22 18:38:09.938716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:53 nsid:1 lba:42264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.078 [2024-07-22 18:38:09.938736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.938767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.079 [2024-07-22 18:38:09.938787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.938817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:42328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.079 [2024-07-22 18:38:09.938854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.938900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:42360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.079 [2024-07-22 18:38:09.938924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.938954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:41256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.079 [2024-07-22 18:38:09.938974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.939004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.079 [2024-07-22 18:38:09.939025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.939055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:41512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.079 [2024-07-22 18:38:09.939075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.939105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:41640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.079 [2024-07-22 18:38:09.939126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.939155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:41768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.079 [2024-07-22 18:38:09.939176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.939206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.079 [2024-07-22 18:38:09.939227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.939257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.079 [2024-07-22 18:38:09.939278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.939307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.079 [2024-07-22 18:38:09.939328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.939357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:42032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.079 [2024-07-22 18:38:09.939377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.939407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:42096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.079 [2024-07-22 18:38:09.939427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.939458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:42160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.079 [2024-07-22 18:38:09.939478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.939518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:41464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.079 [2024-07-22 18:38:09.939541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.939572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:41720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.079 [2024-07-22 18:38:09.939593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.939625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.079 [2024-07-22 18:38:09.939647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.942745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.079 [2024-07-22 18:38:09.942814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.942903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.079 [2024-07-22 18:38:09.942935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:31:02.079 [2024-07-22 18:38:09.942969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.079 [2024-07-22 18:38:09.942991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.943023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.079 [2024-07-22 18:38:09.943045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.943076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:42304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.079 [2024-07-22 18:38:09.943097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.943128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:42336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.079 [2024-07-22 18:38:09.943148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.943179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:41792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.079 [2024-07-22 18:38:09.943200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.943231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:41856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.079 [2024-07-22 18:38:09.943252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.943282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:41920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.079 [2024-07-22 18:38:09.943303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.943334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:41984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:02.079 [2024-07-22 18:38:09.943377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.943412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.079 [2024-07-22 18:38:09.943435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.943465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:112 nsid:1 lba:42112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.079 [2024-07-22 18:38:09.943486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.943517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.079 [2024-07-22 18:38:09.943537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.943568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:41480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.079 [2024-07-22 18:38:09.943588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.943619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:42368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.079 [2024-07-22 18:38:09.943640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.943671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.079 [2024-07-22 18:38:09.943691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.943721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:42400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.079 [2024-07-22 18:38:09.943743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.943773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:42416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.079 [2024-07-22 18:38:09.943812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.943860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:42432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.079 [2024-07-22 18:38:09.943884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.943915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:42448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.079 [2024-07-22 18:38:09.943937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.943966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.079 [2024-07-22 18:38:09.943987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:02.079 [2024-07-22 18:38:09.944016] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:42480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.080 [2024-07-22 18:38:09.944048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:02.080 [2024-07-22 18:38:09.944081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:42496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.080 [2024-07-22 18:38:09.944102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:02.080 [2024-07-22 18:38:09.944132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:42512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.080 [2024-07-22 18:38:09.944153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:02.080 [2024-07-22 18:38:09.944184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:42528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.080 [2024-07-22 18:38:09.944204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:02.080 [2024-07-22 18:38:09.944236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:42544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.080 [2024-07-22 18:38:09.944257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:31:02.080 Received shutdown signal, test time was about 34.808206 seconds
00:31:02.080 
00:31:02.080                                                            Latency(us)
00:31:02.080 Device Information            : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:02.080 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:02.080   Verification LBA range: start 0x0 length 0x4000
00:31:02.080   Nvme0n1                     :      34.81    6285.65      24.55       0.00       0.00   20327.93     191.77 4026531.84
00:31:02.080 ===================================================================================================================
00:31:02.080   Total                       :              6285.65      24.55       0.00       0.00   20327.93     191.77 4026531.84
00:31:02.338 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:31:02.338 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:31:02.338 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:31:02.338 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:02.338 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:31:02.596 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:02.596 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:31:02.596 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:02.596 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:31:02.596 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:02.596 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:31:02.596 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:31:02.596 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 100359 ']' 00:31:02.596 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 100359 00:31:02.596 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 100359 ']' 00:31:02.596 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 100359 00:31:02.596 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:31:02.596 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:02.596 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100359 00:31:02.596 killing process with pid 100359 00:31:02.596 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:02.597 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:02.597 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100359' 00:31:02.597 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 100359 00:31:02.597 18:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 100359 00:31:03.971 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:03.971 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:03.971 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:03.971 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:03.971 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:03.971 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.971 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:03.971 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:03.971 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:03.971 00:31:03.971 real 0m43.534s 00:31:03.971 user 2m20.186s 00:31:03.971 sys 0m10.039s 00:31:03.971 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:03.971 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:03.971 ************************************ 00:31:03.971 END TEST nvmf_host_multipath_status 00:31:03.971 ************************************ 00:31:04.312 18:38:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 
00:31:04.312 18:38:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.312 ************************************ 00:31:04.312 START TEST nvmf_discovery_remove_ifc 00:31:04.312 ************************************ 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:04.312 * Looking for test storage... 00:31:04.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 
-- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:31:04.312 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # 
NVMF_BRIDGE=nvmf_br 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:31:04.313 Cannot find device "nvmf_tgt_br" 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:31:04.313 Cannot find device "nvmf_tgt_br2" 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:31:04.313 Cannot find device "nvmf_tgt_br" 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:31:04.313 Cannot find device "nvmf_tgt_br2" 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:04.313 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:04.313 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:04.313 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:31:04.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:04.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:31:04.571 00:31:04.571 --- 10.0.0.2 ping statistics --- 00:31:04.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.571 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:31:04.571 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:04.571 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:31:04.571 00:31:04.571 --- 10.0.0.3 ping statistics --- 00:31:04.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.571 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:04.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:04.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:31:04.571 00:31:04.571 --- 10.0.0.1 ping statistics --- 00:31:04.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.571 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=101789 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 101789 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 101789 ']' 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:04.571 18:38:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:04.571 18:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:04.829 [2024-07-22 18:38:16.609413] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:31:04.829 [2024-07-22 18:38:16.609607] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:04.829 [2024-07-22 18:38:16.793599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.396 [2024-07-22 18:38:17.121937] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:05.396 [2024-07-22 18:38:17.122077] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:05.396 [2024-07-22 18:38:17.122105] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:05.396 [2024-07-22 18:38:17.122129] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:05.396 [2024-07-22 18:38:17.122148] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:05.396 [2024-07-22 18:38:17.122226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:05.655 18:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:05.655 18:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:31:05.655 18:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:05.655 18:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:05.655 18:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:05.655 18:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:05.655 18:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:05.655 18:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.655 18:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:05.655 [2024-07-22 18:38:17.601428] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:05.655 [2024-07-22 18:38:17.609654] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:05.655 null0 00:31:05.655 [2024-07-22 18:38:17.641587] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:05.655 18:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.655 18:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=101835 
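The nvmf_veth_init and nvmfappstart steps traced above amount to the following by-hand setup of the test topology. This is a rough sketch only: the namespace, interface and address names are the ones printed in the trace, the second target interface (nvmf_tgt_if2 / 10.0.0.3) is built the same way and omitted here, paths are relative to an SPDK checkout, and the RPC batch that creates the null bdev, the subsystem and the 8009/4420 listeners is collapsed in the xtrace, so those rpc.py calls are illustrative rather than the script's exact commands.

    # Veth pair per interface; the target ends are moved into a private network namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target side
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side veth ends together and open the NVMe/TCP port.
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # connectivity check, as in the trace

    # Target application runs inside the namespace, as in the trace.
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    sleep 1               # the real script polls the RPC socket via waitforlisten instead

    # Illustrative provisioning (the trace only shows the resulting listen notices):
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    ./scripts/rpc.py bdev_null_create null0 1000 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420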
00:31:05.655 18:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:05.655 18:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 101835 /tmp/host.sock 00:31:05.655 18:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 101835 ']' 00:31:05.655 18:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:31:05.655 18:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:05.655 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:05.655 18:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:05.655 18:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:05.655 18:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:05.914 [2024-07-22 18:38:17.795084] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:31:05.914 [2024-07-22 18:38:17.795258] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101835 ] 00:31:06.173 [2024-07-22 18:38:17.973607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.432 [2024-07-22 18:38:18.279718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.001 18:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:07.001 18:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:31:07.001 18:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:07.001 18:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:07.001 18:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.001 18:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:07.001 18:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.001 18:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:07.001 18:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.001 18:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:07.260 18:38:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.260 18:38:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 
-q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:07.260 18:38:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.260 18:38:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:08.196 [2024-07-22 18:38:20.106818] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:08.196 [2024-07-22 18:38:20.106884] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:08.196 [2024-07-22 18:38:20.106927] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:08.196 [2024-07-22 18:38:20.195071] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:08.455 [2024-07-22 18:38:20.259121] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:08.455 [2024-07-22 18:38:20.259222] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:08.455 [2024-07-22 18:38:20.259289] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:08.455 [2024-07-22 18:38:20.259321] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:08.456 [2024-07-22 18:38:20.259372] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:08.456 18:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.456 18:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:08.456 18:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:08.456 [2024-07-22 18:38:20.265274] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61500002b000 was disconnected and freed. delete nvme_qpair. 
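On the host side (the second nvmf_tgt, started as an initiator-only app on /tmp/host.sock), the sequence up to the point where nvme0n1 appears reduces to the commands below. This is a sketch that replaces the test's rpc_cmd wrapper with direct scripts/rpc.py invocations against the same socket; the flags are the ones visible in the trace.

    # Initiator app: its own core mask and RPC socket, bdev_nvme debug logging enabled.
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &

    # bdev_nvme options must go in before framework init, hence --wait-for-rpc above.
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    ./scripts/rpc.py -s /tmp/host.sock framework_start_init

    # Attach through the discovery service on 10.0.0.2:8009. The short loss/reconnect
    # timeouts are what let the interface-removal step below retire the controller quickly.
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

    # The namespace discovered behind nqn.2016-06.io.spdk:cnode0 shows up as nvme0n1.
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs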
00:31:08.456 18:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:08.456 18:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.456 18:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:08.456 18:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:08.456 18:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:08.456 18:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:08.456 18:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.456 18:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:08.456 18:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:31:08.456 18:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:31:08.456 18:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:08.456 18:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:08.456 18:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:08.456 18:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:08.456 18:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:08.456 18:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:08.456 18:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.456 18:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:08.456 18:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.456 18:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:08.456 18:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:09.392 18:38:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:09.392 18:38:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:09.392 18:38:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:09.392 18:38:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:09.392 18:38:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.393 18:38:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:09.393 18:38:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:09.651 18:38:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.651 18:38:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:09.651 18:38:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:10.598 18:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:10.598 18:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:10.598 18:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:10.598 18:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.598 18:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:10.598 18:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:10.598 18:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:10.598 18:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.598 18:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:10.598 18:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:11.548 18:38:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:11.548 18:38:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:11.548 18:38:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:11.548 18:38:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.548 18:38:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:11.548 18:38:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:11.548 18:38:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:11.806 18:38:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.806 18:38:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:11.806 18:38:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:12.741 18:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:12.741 18:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:12.741 18:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:12.741 18:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.741 18:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:12.741 18:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:12.741 18:38:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:12.741 18:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.741 18:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:12.741 18:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:13.675 18:38:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:13.675 18:38:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:13.675 18:38:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:13.675 18:38:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:13.675 18:38:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.675 18:38:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:13.675 18:38:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:13.675 [2024-07-22 18:38:25.686349] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:13.675 [2024-07-22 18:38:25.686485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.675 [2024-07-22 18:38:25.686511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.675 [2024-07-22 18:38:25.686533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.675 [2024-07-22 18:38:25.686548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.676 [2024-07-22 18:38:25.686563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.676 [2024-07-22 18:38:25.686577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.676 [2024-07-22 18:38:25.686602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.676 [2024-07-22 18:38:25.686617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.676 [2024-07-22 18:38:25.686632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.676 [2024-07-22 18:38:25.686646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.676 [2024-07-22 18:38:25.686661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:31:13.934 [2024-07-22 18:38:25.696338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file 
descriptor 00:31:13.934 18:38:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.934 [2024-07-22 18:38:25.706394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:13.934 18:38:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:13.934 18:38:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:14.867 [2024-07-22 18:38:26.718311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:14.867 [2024-07-22 18:38:26.718493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.2, port=4420 00:31:14.867 [2024-07-22 18:38:26.718563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:31:14.867 [2024-07-22 18:38:26.718701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:31:14.867 [2024-07-22 18:38:26.718964] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:14.867 [2024-07-22 18:38:26.719047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:14.867 [2024-07-22 18:38:26.719084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:14.867 [2024-07-22 18:38:26.719123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:14.867 [2024-07-22 18:38:26.719209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.867 [2024-07-22 18:38:26.719259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:14.867 18:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:14.867 18:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:14.867 18:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.867 18:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:14.867 18:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:14.867 18:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:14.867 18:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:14.867 18:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.867 18:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:14.867 18:38:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:15.801 [2024-07-22 18:38:27.719426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:31:15.801 [2024-07-22 18:38:27.719519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:15.801 [2024-07-22 18:38:27.719541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:15.801 [2024-07-22 18:38:27.719558] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:31:15.801 [2024-07-22 18:38:27.719598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.801 [2024-07-22 18:38:27.719669] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:15.801 [2024-07-22 18:38:27.719760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.801 [2024-07-22 18:38:27.719784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.801 [2024-07-22 18:38:27.719806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.801 [2024-07-22 18:38:27.719820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.801 [2024-07-22 18:38:27.719834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.801 [2024-07-22 18:38:27.719847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.801 [2024-07-22 18:38:27.719875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.801 [2024-07-22 18:38:27.719893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.801 [2024-07-22 18:38:27.719908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.801 [2024-07-22 18:38:27.719922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.801 [2024-07-22 18:38:27.719936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
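The errors above are the point of the test: deleting the target-side address and downing the interface breaks the established connection, the host retries every reconnect-delay-sec (1 s) and, once ctrlr-loss-timeout-sec (2 s) expires, deletes the nvme0 controller together with its discovery entry, so the bdev list drains to empty. A rough stand-alone equivalent of this step and of the wait_for_bdev '' loop:

    # Pull the target address out from under the live connection.
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

    # Poll once a second until no bdevs remain, i.e. the controller loss timeout has fired.
    while [ -n "$(./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name')" ]; do
        sleep 1
    done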
00:31:15.801 [2024-07-22 18:38:27.720045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:31:15.801 [2024-07-22 18:38:27.721040] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:15.801 [2024-07-22 18:38:27.721064] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:15.801 18:38:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:15.801 18:38:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:15.801 18:38:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:15.801 18:38:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:15.801 18:38:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.801 18:38:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:15.801 18:38:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:15.801 18:38:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.060 18:38:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:16.060 18:38:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:16.060 18:38:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:16.060 18:38:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:16.060 18:38:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:16.060 18:38:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:16.060 18:38:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.060 18:38:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:16.060 18:38:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:16.060 18:38:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:16.060 18:38:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:16.060 18:38:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.060 18:38:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:16.060 18:38:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:16.995 18:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:16.995 18:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:16.995 18:38:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:16.995 18:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.995 18:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:16.995 18:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:16.995 18:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:16.995 18:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.995 18:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:16.995 18:38:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:17.930 [2024-07-22 18:38:29.729984] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:17.930 [2024-07-22 18:38:29.730060] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:17.930 [2024-07-22 18:38:29.730105] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:17.930 [2024-07-22 18:38:29.816210] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:17.930 [2024-07-22 18:38:29.882422] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:17.930 [2024-07-22 18:38:29.882510] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:17.930 [2024-07-22 18:38:29.882588] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:17.930 [2024-07-22 18:38:29.882619] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:17.930 [2024-07-22 18:38:29.882637] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:17.930 [2024-07-22 18:38:29.888604] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61500002b780 was disconnected and freed. delete nvme_qpair. 
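Restoring the address reverses the picture without any further RPCs: the discovery service reconnects on its own (the trace shows the discovery ctrlr attaching again) and surfaces the subsystem under a fresh controller name (nvme1 in the trace), which is why the test now waits for nvme1n1 instead of nvme0n1. Sketched as stand-alone commands:

    # Bring the target address and interface back inside the namespace.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    # Wait for the re-attached namespace to surface as nvme1n1.
    until ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | grep -qx nvme1n1; do
        sleep 1
    done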
00:31:18.188 18:38:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:18.188 18:38:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:18.188 18:38:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:18.188 18:38:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.188 18:38:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:18.188 18:38:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:18.188 18:38:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:18.188 18:38:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.188 18:38:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:18.188 18:38:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:18.188 18:38:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 101835 00:31:18.188 18:38:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 101835 ']' 00:31:18.188 18:38:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 101835 00:31:18.188 18:38:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:31:18.188 18:38:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:18.188 18:38:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101835 00:31:18.188 killing process with pid 101835 00:31:18.188 18:38:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:18.188 18:38:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:18.189 18:38:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101835' 00:31:18.189 18:38:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 101835 00:31:18.189 18:38:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 101835 00:31:19.563 18:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:19.563 18:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:19.563 18:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:31:19.563 18:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:19.563 18:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:31:19.563 18:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:19.563 18:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:19.563 rmmod nvme_tcp 00:31:19.563 rmmod nvme_fabrics 00:31:19.563 rmmod nvme_keyring 00:31:19.563 18:38:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:19.563 18:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:31:19.563 18:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:31:19.563 18:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 101789 ']' 00:31:19.563 18:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 101789 00:31:19.563 18:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 101789 ']' 00:31:19.564 18:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 101789 00:31:19.564 18:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:31:19.564 18:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:19.564 18:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101789 00:31:19.564 killing process with pid 101789 00:31:19.564 18:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:19.564 18:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:19.564 18:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101789' 00:31:19.564 18:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 101789 00:31:19.564 18:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 101789 00:31:20.941 18:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:20.941 18:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:20.941 18:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:20.941 18:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:20.941 18:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:20.941 18:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.941 18:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:20.941 18:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.941 18:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:20.941 00:31:20.941 real 0m16.887s 00:31:20.941 user 0m29.323s 00:31:20.941 sys 0m2.045s 00:31:20.941 18:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:20.941 18:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:20.941 ************************************ 00:31:20.941 END TEST nvmf_discovery_remove_ifc 00:31:20.941 ************************************ 00:31:20.941 18:38:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:31:20.941 18:38:32 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:20.941 18:38:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:20.941 18:38:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:20.941 18:38:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.941 ************************************ 00:31:20.941 START TEST nvmf_identify_kernel_target 00:31:20.941 ************************************ 00:31:20.941 18:38:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:21.201 * Looking for test storage... 00:31:21.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- 
# [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:21.201 18:38:33 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:21.201 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:21.202 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:21.202 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:21.202 
18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:21.202 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:21.202 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:21.202 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:31:21.202 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:31:21.202 Cannot find device "nvmf_tgt_br" 00:31:21.202 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:31:21.202 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:31:21.202 Cannot find device "nvmf_tgt_br2" 00:31:21.202 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:31:21.202 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:31:21.202 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:31:21.202 Cannot find device "nvmf_tgt_br" 00:31:21.202 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:31:21.202 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:31:21.202 Cannot find device "nvmf_tgt_br2" 00:31:21.202 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:31:21.202 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:31:21.202 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:31:21.202 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:21.202 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:21.202 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:31:21.202 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:21.202 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:21.202 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:31:21.202 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:31:21.202 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:21.202 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:21.202 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 
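For reference, the veth/namespace topology that nvmf_veth_init traces out above and in the records immediately below can be reproduced standalone with roughly the commands that follow. This is a minimal sketch distilled from the xtrace lines themselves (namespace, interface, and address names are the ones shown in this log); it is not the full helper from nvmf/common.sh, which additionally tears down any leftover interfaces and checks for errors.
  # target-side interfaces live in a separate network namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addressing: initiator 10.0.0.1, targets 10.0.0.2 and 10.0.0.3 (the addresses pinged below)
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # a bridge in the root namespace stitches the three veth peers together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # admit NVMe/TCP traffic (port 4420) and allow bridge-local forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT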
00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:31:21.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:21.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:31:21.467 00:31:21.467 --- 10.0.0.2 ping statistics --- 00:31:21.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.467 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:31:21.467 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:21.467 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:31:21.467 00:31:21.467 --- 10.0.0.3 ping statistics --- 00:31:21.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.467 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:21.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:21.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:31:21.467 00:31:21.467 --- 10.0.0.1 ping statistics --- 00:31:21.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.467 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:21.467 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:21.725 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:21.983 Waiting for block devices as requested 00:31:21.983 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:21.983 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:21.983 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:21.983 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:21.983 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:21.983 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:21.983 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:21.983 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:21.983 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:21.983 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:21.983 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:31:21.983 No valid GPT data, bailing 00:31:21.983 18:38:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:31:22.242 18:38:34 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:31:22.242 No valid GPT data, bailing 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:31:22.242 No valid GPT data, bailing 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:22.242 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:31:22.243 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:31:22.243 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:31:22.243 No valid GPT data, bailing 00:31:22.243 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:31:22.243 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:31:22.243 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:31:22.243 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:31:22.243 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:31:22.243 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:22.243 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:22.243 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:22.243 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:22.243 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:31:22.243 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:31:22.243 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:31:22.243 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:22.243 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:31:22.243 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:31:22.243 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:31:22.243 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:22.501 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -a 10.0.0.1 -t tcp -s 4420 00:31:22.501 00:31:22.501 Discovery Log Number of Records 2, Generation counter 2 00:31:22.501 =====Discovery Log Entry 0====== 00:31:22.501 trtype: tcp 00:31:22.501 adrfam: ipv4 00:31:22.501 subtype: current discovery subsystem 00:31:22.501 treq: not specified, sq flow control disable supported 00:31:22.501 portid: 1 00:31:22.501 trsvcid: 4420 00:31:22.501 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:22.501 traddr: 10.0.0.1 00:31:22.502 eflags: none 00:31:22.502 sectype: none 00:31:22.502 =====Discovery Log Entry 1====== 00:31:22.502 trtype: tcp 00:31:22.502 adrfam: ipv4 00:31:22.502 subtype: nvme subsystem 00:31:22.502 treq: not 
specified, sq flow control disable supported 00:31:22.502 portid: 1 00:31:22.502 trsvcid: 4420 00:31:22.502 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:22.502 traddr: 10.0.0.1 00:31:22.502 eflags: none 00:31:22.502 sectype: none 00:31:22.502 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:22.502 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:22.502 ===================================================== 00:31:22.502 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:22.502 ===================================================== 00:31:22.502 Controller Capabilities/Features 00:31:22.502 ================================ 00:31:22.502 Vendor ID: 0000 00:31:22.502 Subsystem Vendor ID: 0000 00:31:22.502 Serial Number: e407b36c6e2a1d993001 00:31:22.502 Model Number: Linux 00:31:22.502 Firmware Version: 6.7.0-68 00:31:22.502 Recommended Arb Burst: 0 00:31:22.502 IEEE OUI Identifier: 00 00 00 00:31:22.502 Multi-path I/O 00:31:22.502 May have multiple subsystem ports: No 00:31:22.502 May have multiple controllers: No 00:31:22.502 Associated with SR-IOV VF: No 00:31:22.502 Max Data Transfer Size: Unlimited 00:31:22.502 Max Number of Namespaces: 0 00:31:22.502 Max Number of I/O Queues: 1024 00:31:22.502 NVMe Specification Version (VS): 1.3 00:31:22.502 NVMe Specification Version (Identify): 1.3 00:31:22.502 Maximum Queue Entries: 1024 00:31:22.502 Contiguous Queues Required: No 00:31:22.502 Arbitration Mechanisms Supported 00:31:22.502 Weighted Round Robin: Not Supported 00:31:22.502 Vendor Specific: Not Supported 00:31:22.502 Reset Timeout: 7500 ms 00:31:22.502 Doorbell Stride: 4 bytes 00:31:22.502 NVM Subsystem Reset: Not Supported 00:31:22.502 Command Sets Supported 00:31:22.502 NVM Command Set: Supported 00:31:22.502 Boot Partition: Not Supported 00:31:22.502 Memory Page Size Minimum: 4096 bytes 00:31:22.502 Memory Page Size Maximum: 4096 bytes 00:31:22.502 Persistent Memory Region: Not Supported 00:31:22.502 Optional Asynchronous Events Supported 00:31:22.502 Namespace Attribute Notices: Not Supported 00:31:22.502 Firmware Activation Notices: Not Supported 00:31:22.502 ANA Change Notices: Not Supported 00:31:22.502 PLE Aggregate Log Change Notices: Not Supported 00:31:22.502 LBA Status Info Alert Notices: Not Supported 00:31:22.502 EGE Aggregate Log Change Notices: Not Supported 00:31:22.502 Normal NVM Subsystem Shutdown event: Not Supported 00:31:22.502 Zone Descriptor Change Notices: Not Supported 00:31:22.502 Discovery Log Change Notices: Supported 00:31:22.502 Controller Attributes 00:31:22.502 128-bit Host Identifier: Not Supported 00:31:22.502 Non-Operational Permissive Mode: Not Supported 00:31:22.502 NVM Sets: Not Supported 00:31:22.502 Read Recovery Levels: Not Supported 00:31:22.502 Endurance Groups: Not Supported 00:31:22.502 Predictable Latency Mode: Not Supported 00:31:22.502 Traffic Based Keep ALive: Not Supported 00:31:22.502 Namespace Granularity: Not Supported 00:31:22.502 SQ Associations: Not Supported 00:31:22.502 UUID List: Not Supported 00:31:22.502 Multi-Domain Subsystem: Not Supported 00:31:22.502 Fixed Capacity Management: Not Supported 00:31:22.502 Variable Capacity Management: Not Supported 00:31:22.502 Delete Endurance Group: Not Supported 00:31:22.502 Delete NVM Set: Not Supported 00:31:22.502 Extended LBA Formats Supported: Not Supported 00:31:22.502 Flexible Data 
Placement Supported: Not Supported 00:31:22.502 00:31:22.502 Controller Memory Buffer Support 00:31:22.502 ================================ 00:31:22.502 Supported: No 00:31:22.502 00:31:22.502 Persistent Memory Region Support 00:31:22.502 ================================ 00:31:22.502 Supported: No 00:31:22.502 00:31:22.502 Admin Command Set Attributes 00:31:22.502 ============================ 00:31:22.502 Security Send/Receive: Not Supported 00:31:22.502 Format NVM: Not Supported 00:31:22.502 Firmware Activate/Download: Not Supported 00:31:22.502 Namespace Management: Not Supported 00:31:22.502 Device Self-Test: Not Supported 00:31:22.502 Directives: Not Supported 00:31:22.502 NVMe-MI: Not Supported 00:31:22.502 Virtualization Management: Not Supported 00:31:22.502 Doorbell Buffer Config: Not Supported 00:31:22.502 Get LBA Status Capability: Not Supported 00:31:22.502 Command & Feature Lockdown Capability: Not Supported 00:31:22.502 Abort Command Limit: 1 00:31:22.502 Async Event Request Limit: 1 00:31:22.502 Number of Firmware Slots: N/A 00:31:22.502 Firmware Slot 1 Read-Only: N/A 00:31:22.761 Firmware Activation Without Reset: N/A 00:31:22.761 Multiple Update Detection Support: N/A 00:31:22.761 Firmware Update Granularity: No Information Provided 00:31:22.761 Per-Namespace SMART Log: No 00:31:22.761 Asymmetric Namespace Access Log Page: Not Supported 00:31:22.761 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:22.761 Command Effects Log Page: Not Supported 00:31:22.761 Get Log Page Extended Data: Supported 00:31:22.761 Telemetry Log Pages: Not Supported 00:31:22.761 Persistent Event Log Pages: Not Supported 00:31:22.761 Supported Log Pages Log Page: May Support 00:31:22.761 Commands Supported & Effects Log Page: Not Supported 00:31:22.761 Feature Identifiers & Effects Log Page:May Support 00:31:22.761 NVMe-MI Commands & Effects Log Page: May Support 00:31:22.761 Data Area 4 for Telemetry Log: Not Supported 00:31:22.761 Error Log Page Entries Supported: 1 00:31:22.762 Keep Alive: Not Supported 00:31:22.762 00:31:22.762 NVM Command Set Attributes 00:31:22.762 ========================== 00:31:22.762 Submission Queue Entry Size 00:31:22.762 Max: 1 00:31:22.762 Min: 1 00:31:22.762 Completion Queue Entry Size 00:31:22.762 Max: 1 00:31:22.762 Min: 1 00:31:22.762 Number of Namespaces: 0 00:31:22.762 Compare Command: Not Supported 00:31:22.762 Write Uncorrectable Command: Not Supported 00:31:22.762 Dataset Management Command: Not Supported 00:31:22.762 Write Zeroes Command: Not Supported 00:31:22.762 Set Features Save Field: Not Supported 00:31:22.762 Reservations: Not Supported 00:31:22.762 Timestamp: Not Supported 00:31:22.762 Copy: Not Supported 00:31:22.762 Volatile Write Cache: Not Present 00:31:22.762 Atomic Write Unit (Normal): 1 00:31:22.762 Atomic Write Unit (PFail): 1 00:31:22.762 Atomic Compare & Write Unit: 1 00:31:22.762 Fused Compare & Write: Not Supported 00:31:22.762 Scatter-Gather List 00:31:22.762 SGL Command Set: Supported 00:31:22.762 SGL Keyed: Not Supported 00:31:22.762 SGL Bit Bucket Descriptor: Not Supported 00:31:22.762 SGL Metadata Pointer: Not Supported 00:31:22.762 Oversized SGL: Not Supported 00:31:22.762 SGL Metadata Address: Not Supported 00:31:22.762 SGL Offset: Supported 00:31:22.762 Transport SGL Data Block: Not Supported 00:31:22.762 Replay Protected Memory Block: Not Supported 00:31:22.762 00:31:22.762 Firmware Slot Information 00:31:22.762 ========================= 00:31:22.762 Active slot: 0 00:31:22.762 00:31:22.762 00:31:22.762 Error Log 
00:31:22.762 ========= 00:31:22.762 00:31:22.762 Active Namespaces 00:31:22.762 ================= 00:31:22.762 Discovery Log Page 00:31:22.762 ================== 00:31:22.762 Generation Counter: 2 00:31:22.762 Number of Records: 2 00:31:22.762 Record Format: 0 00:31:22.762 00:31:22.762 Discovery Log Entry 0 00:31:22.762 ---------------------- 00:31:22.762 Transport Type: 3 (TCP) 00:31:22.762 Address Family: 1 (IPv4) 00:31:22.762 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:22.762 Entry Flags: 00:31:22.762 Duplicate Returned Information: 0 00:31:22.762 Explicit Persistent Connection Support for Discovery: 0 00:31:22.762 Transport Requirements: 00:31:22.762 Secure Channel: Not Specified 00:31:22.762 Port ID: 1 (0x0001) 00:31:22.762 Controller ID: 65535 (0xffff) 00:31:22.762 Admin Max SQ Size: 32 00:31:22.762 Transport Service Identifier: 4420 00:31:22.762 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:22.762 Transport Address: 10.0.0.1 00:31:22.762 Discovery Log Entry 1 00:31:22.762 ---------------------- 00:31:22.762 Transport Type: 3 (TCP) 00:31:22.762 Address Family: 1 (IPv4) 00:31:22.762 Subsystem Type: 2 (NVM Subsystem) 00:31:22.762 Entry Flags: 00:31:22.762 Duplicate Returned Information: 0 00:31:22.762 Explicit Persistent Connection Support for Discovery: 0 00:31:22.762 Transport Requirements: 00:31:22.762 Secure Channel: Not Specified 00:31:22.762 Port ID: 1 (0x0001) 00:31:22.762 Controller ID: 65535 (0xffff) 00:31:22.762 Admin Max SQ Size: 32 00:31:22.762 Transport Service Identifier: 4420 00:31:22.762 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:22.762 Transport Address: 10.0.0.1 00:31:22.762 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:22.762 get_feature(0x01) failed 00:31:22.762 get_feature(0x02) failed 00:31:22.762 get_feature(0x04) failed 00:31:22.762 ===================================================== 00:31:22.762 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:22.762 ===================================================== 00:31:22.762 Controller Capabilities/Features 00:31:22.762 ================================ 00:31:22.762 Vendor ID: 0000 00:31:22.762 Subsystem Vendor ID: 0000 00:31:22.762 Serial Number: 6eb60fb43de55dc2b129 00:31:22.762 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:22.762 Firmware Version: 6.7.0-68 00:31:22.762 Recommended Arb Burst: 6 00:31:22.762 IEEE OUI Identifier: 00 00 00 00:31:22.762 Multi-path I/O 00:31:22.762 May have multiple subsystem ports: Yes 00:31:22.762 May have multiple controllers: Yes 00:31:22.762 Associated with SR-IOV VF: No 00:31:22.762 Max Data Transfer Size: Unlimited 00:31:22.762 Max Number of Namespaces: 1024 00:31:22.762 Max Number of I/O Queues: 128 00:31:22.762 NVMe Specification Version (VS): 1.3 00:31:22.762 NVMe Specification Version (Identify): 1.3 00:31:22.762 Maximum Queue Entries: 1024 00:31:22.762 Contiguous Queues Required: No 00:31:22.762 Arbitration Mechanisms Supported 00:31:22.762 Weighted Round Robin: Not Supported 00:31:22.762 Vendor Specific: Not Supported 00:31:22.762 Reset Timeout: 7500 ms 00:31:22.762 Doorbell Stride: 4 bytes 00:31:22.762 NVM Subsystem Reset: Not Supported 00:31:22.762 Command Sets Supported 00:31:22.762 NVM Command Set: Supported 00:31:22.762 Boot Partition: Not Supported 00:31:22.762 Memory 
Page Size Minimum: 4096 bytes 00:31:22.762 Memory Page Size Maximum: 4096 bytes 00:31:22.762 Persistent Memory Region: Not Supported 00:31:22.762 Optional Asynchronous Events Supported 00:31:22.762 Namespace Attribute Notices: Supported 00:31:22.762 Firmware Activation Notices: Not Supported 00:31:22.762 ANA Change Notices: Supported 00:31:22.762 PLE Aggregate Log Change Notices: Not Supported 00:31:22.762 LBA Status Info Alert Notices: Not Supported 00:31:22.762 EGE Aggregate Log Change Notices: Not Supported 00:31:22.762 Normal NVM Subsystem Shutdown event: Not Supported 00:31:22.762 Zone Descriptor Change Notices: Not Supported 00:31:22.762 Discovery Log Change Notices: Not Supported 00:31:22.762 Controller Attributes 00:31:22.762 128-bit Host Identifier: Supported 00:31:22.762 Non-Operational Permissive Mode: Not Supported 00:31:22.762 NVM Sets: Not Supported 00:31:22.762 Read Recovery Levels: Not Supported 00:31:22.762 Endurance Groups: Not Supported 00:31:22.762 Predictable Latency Mode: Not Supported 00:31:22.762 Traffic Based Keep ALive: Supported 00:31:22.762 Namespace Granularity: Not Supported 00:31:22.762 SQ Associations: Not Supported 00:31:22.762 UUID List: Not Supported 00:31:22.762 Multi-Domain Subsystem: Not Supported 00:31:22.762 Fixed Capacity Management: Not Supported 00:31:22.762 Variable Capacity Management: Not Supported 00:31:22.762 Delete Endurance Group: Not Supported 00:31:22.762 Delete NVM Set: Not Supported 00:31:22.762 Extended LBA Formats Supported: Not Supported 00:31:22.762 Flexible Data Placement Supported: Not Supported 00:31:22.762 00:31:22.762 Controller Memory Buffer Support 00:31:22.762 ================================ 00:31:22.762 Supported: No 00:31:22.762 00:31:22.762 Persistent Memory Region Support 00:31:22.762 ================================ 00:31:22.762 Supported: No 00:31:22.762 00:31:22.762 Admin Command Set Attributes 00:31:22.762 ============================ 00:31:22.762 Security Send/Receive: Not Supported 00:31:22.762 Format NVM: Not Supported 00:31:22.762 Firmware Activate/Download: Not Supported 00:31:22.762 Namespace Management: Not Supported 00:31:22.762 Device Self-Test: Not Supported 00:31:22.762 Directives: Not Supported 00:31:22.762 NVMe-MI: Not Supported 00:31:22.762 Virtualization Management: Not Supported 00:31:22.762 Doorbell Buffer Config: Not Supported 00:31:22.762 Get LBA Status Capability: Not Supported 00:31:22.762 Command & Feature Lockdown Capability: Not Supported 00:31:22.762 Abort Command Limit: 4 00:31:22.762 Async Event Request Limit: 4 00:31:22.762 Number of Firmware Slots: N/A 00:31:22.762 Firmware Slot 1 Read-Only: N/A 00:31:22.762 Firmware Activation Without Reset: N/A 00:31:22.762 Multiple Update Detection Support: N/A 00:31:22.762 Firmware Update Granularity: No Information Provided 00:31:22.762 Per-Namespace SMART Log: Yes 00:31:22.763 Asymmetric Namespace Access Log Page: Supported 00:31:22.763 ANA Transition Time : 10 sec 00:31:22.763 00:31:22.763 Asymmetric Namespace Access Capabilities 00:31:22.763 ANA Optimized State : Supported 00:31:22.763 ANA Non-Optimized State : Supported 00:31:22.763 ANA Inaccessible State : Supported 00:31:22.763 ANA Persistent Loss State : Supported 00:31:22.763 ANA Change State : Supported 00:31:22.763 ANAGRPID is not changed : No 00:31:22.763 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:22.763 00:31:22.763 ANA Group Identifier Maximum : 128 00:31:22.763 Number of ANA Group Identifiers : 128 00:31:22.763 Max Number of Allowed Namespaces : 1024 00:31:22.763 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:31:22.763 Command Effects Log Page: Supported 00:31:22.763 Get Log Page Extended Data: Supported 00:31:22.763 Telemetry Log Pages: Not Supported 00:31:22.763 Persistent Event Log Pages: Not Supported 00:31:22.763 Supported Log Pages Log Page: May Support 00:31:22.763 Commands Supported & Effects Log Page: Not Supported 00:31:22.763 Feature Identifiers & Effects Log Page:May Support 00:31:22.763 NVMe-MI Commands & Effects Log Page: May Support 00:31:22.763 Data Area 4 for Telemetry Log: Not Supported 00:31:22.763 Error Log Page Entries Supported: 128 00:31:22.763 Keep Alive: Supported 00:31:22.763 Keep Alive Granularity: 1000 ms 00:31:22.763 00:31:22.763 NVM Command Set Attributes 00:31:22.763 ========================== 00:31:22.763 Submission Queue Entry Size 00:31:22.763 Max: 64 00:31:22.763 Min: 64 00:31:22.763 Completion Queue Entry Size 00:31:22.763 Max: 16 00:31:22.763 Min: 16 00:31:22.763 Number of Namespaces: 1024 00:31:22.763 Compare Command: Not Supported 00:31:22.763 Write Uncorrectable Command: Not Supported 00:31:22.763 Dataset Management Command: Supported 00:31:22.763 Write Zeroes Command: Supported 00:31:22.763 Set Features Save Field: Not Supported 00:31:22.763 Reservations: Not Supported 00:31:22.763 Timestamp: Not Supported 00:31:22.763 Copy: Not Supported 00:31:22.763 Volatile Write Cache: Present 00:31:22.763 Atomic Write Unit (Normal): 1 00:31:22.763 Atomic Write Unit (PFail): 1 00:31:22.763 Atomic Compare & Write Unit: 1 00:31:22.763 Fused Compare & Write: Not Supported 00:31:22.763 Scatter-Gather List 00:31:22.763 SGL Command Set: Supported 00:31:22.763 SGL Keyed: Not Supported 00:31:22.763 SGL Bit Bucket Descriptor: Not Supported 00:31:22.763 SGL Metadata Pointer: Not Supported 00:31:22.763 Oversized SGL: Not Supported 00:31:22.763 SGL Metadata Address: Not Supported 00:31:22.763 SGL Offset: Supported 00:31:22.763 Transport SGL Data Block: Not Supported 00:31:22.763 Replay Protected Memory Block: Not Supported 00:31:22.763 00:31:22.763 Firmware Slot Information 00:31:22.763 ========================= 00:31:22.763 Active slot: 0 00:31:22.763 00:31:22.763 Asymmetric Namespace Access 00:31:22.763 =========================== 00:31:22.763 Change Count : 0 00:31:22.763 Number of ANA Group Descriptors : 1 00:31:22.763 ANA Group Descriptor : 0 00:31:22.763 ANA Group ID : 1 00:31:22.763 Number of NSID Values : 1 00:31:22.763 Change Count : 0 00:31:22.763 ANA State : 1 00:31:22.763 Namespace Identifier : 1 00:31:22.763 00:31:22.763 Commands Supported and Effects 00:31:22.763 ============================== 00:31:22.763 Admin Commands 00:31:22.763 -------------- 00:31:22.763 Get Log Page (02h): Supported 00:31:22.763 Identify (06h): Supported 00:31:22.763 Abort (08h): Supported 00:31:22.763 Set Features (09h): Supported 00:31:22.763 Get Features (0Ah): Supported 00:31:22.763 Asynchronous Event Request (0Ch): Supported 00:31:22.763 Keep Alive (18h): Supported 00:31:22.763 I/O Commands 00:31:22.763 ------------ 00:31:22.763 Flush (00h): Supported 00:31:22.763 Write (01h): Supported LBA-Change 00:31:22.763 Read (02h): Supported 00:31:22.763 Write Zeroes (08h): Supported LBA-Change 00:31:22.763 Dataset Management (09h): Supported 00:31:22.763 00:31:22.763 Error Log 00:31:22.763 ========= 00:31:22.763 Entry: 0 00:31:22.763 Error Count: 0x3 00:31:22.763 Submission Queue Id: 0x0 00:31:22.763 Command Id: 0x5 00:31:22.763 Phase Bit: 0 00:31:22.763 Status Code: 0x2 00:31:22.763 Status Code Type: 0x0 00:31:22.763 Do Not Retry: 1 00:31:23.022 Error 
Location: 0x28 00:31:23.022 LBA: 0x0 00:31:23.022 Namespace: 0x0 00:31:23.022 Vendor Log Page: 0x0 00:31:23.022 ----------- 00:31:23.022 Entry: 1 00:31:23.022 Error Count: 0x2 00:31:23.022 Submission Queue Id: 0x0 00:31:23.022 Command Id: 0x5 00:31:23.022 Phase Bit: 0 00:31:23.022 Status Code: 0x2 00:31:23.022 Status Code Type: 0x0 00:31:23.022 Do Not Retry: 1 00:31:23.022 Error Location: 0x28 00:31:23.022 LBA: 0x0 00:31:23.022 Namespace: 0x0 00:31:23.022 Vendor Log Page: 0x0 00:31:23.022 ----------- 00:31:23.022 Entry: 2 00:31:23.022 Error Count: 0x1 00:31:23.022 Submission Queue Id: 0x0 00:31:23.022 Command Id: 0x4 00:31:23.022 Phase Bit: 0 00:31:23.022 Status Code: 0x2 00:31:23.022 Status Code Type: 0x0 00:31:23.022 Do Not Retry: 1 00:31:23.022 Error Location: 0x28 00:31:23.022 LBA: 0x0 00:31:23.022 Namespace: 0x0 00:31:23.022 Vendor Log Page: 0x0 00:31:23.022 00:31:23.022 Number of Queues 00:31:23.022 ================ 00:31:23.022 Number of I/O Submission Queues: 128 00:31:23.022 Number of I/O Completion Queues: 128 00:31:23.022 00:31:23.022 ZNS Specific Controller Data 00:31:23.022 ============================ 00:31:23.022 Zone Append Size Limit: 0 00:31:23.022 00:31:23.022 00:31:23.022 Active Namespaces 00:31:23.022 ================= 00:31:23.022 get_feature(0x05) failed 00:31:23.022 Namespace ID:1 00:31:23.022 Command Set Identifier: NVM (00h) 00:31:23.022 Deallocate: Supported 00:31:23.022 Deallocated/Unwritten Error: Not Supported 00:31:23.022 Deallocated Read Value: Unknown 00:31:23.022 Deallocate in Write Zeroes: Not Supported 00:31:23.022 Deallocated Guard Field: 0xFFFF 00:31:23.022 Flush: Supported 00:31:23.022 Reservation: Not Supported 00:31:23.022 Namespace Sharing Capabilities: Multiple Controllers 00:31:23.022 Size (in LBAs): 1310720 (5GiB) 00:31:23.022 Capacity (in LBAs): 1310720 (5GiB) 00:31:23.022 Utilization (in LBAs): 1310720 (5GiB) 00:31:23.022 UUID: b0ad6c50-a7fe-4d27-873f-138295567686 00:31:23.022 Thin Provisioning: Not Supported 00:31:23.022 Per-NS Atomic Units: Yes 00:31:23.022 Atomic Boundary Size (Normal): 0 00:31:23.022 Atomic Boundary Size (PFail): 0 00:31:23.022 Atomic Boundary Offset: 0 00:31:23.022 NGUID/EUI64 Never Reused: No 00:31:23.022 ANA group ID: 1 00:31:23.022 Namespace Write Protected: No 00:31:23.022 Number of LBA Formats: 1 00:31:23.022 Current LBA Format: LBA Format #00 00:31:23.022 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:31:23.022 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:23.022 rmmod nvme_tcp 00:31:23.022 rmmod nvme_fabrics 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:31:23.022 18:38:34 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:23.022 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:23.023 18:38:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:23.023 18:38:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:23.958 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:23.958 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:31:23.958 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:31:23.958 ************************************ 00:31:23.958 END TEST nvmf_identify_kernel_target 00:31:23.958 ************************************ 00:31:23.958 00:31:23.958 real 0m2.895s 00:31:23.958 user 0m0.999s 00:31:23.958 sys 0m1.412s 00:31:23.958 18:38:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:23.958 18:38:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:23.958 18:38:35 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1142 -- # return 0 00:31:23.958 18:38:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:23.958 18:38:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:23.958 18:38:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:23.958 18:38:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.958 ************************************ 00:31:23.958 START TEST nvmf_auth_host 00:31:23.958 ************************************ 00:31:23.958 18:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:24.223 * Looking for test storage... 00:31:24.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:31:24.223 18:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:24.223 18:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:24.223 18:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:24.223 18:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:24.223 18:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:24.223 18:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:24.223 18:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:24.223 18:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:24.223 18:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:24.223 18:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:24.223 18:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:24.223 18:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:24.223 18:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:24.223 18:38:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:24.223 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:24.224 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:24.224 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:31:24.224 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:31:24.224 Cannot find device "nvmf_tgt_br" 00:31:24.224 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:31:24.224 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:31:24.224 Cannot find device "nvmf_tgt_br2" 00:31:24.224 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:31:24.224 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:31:24.224 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:31:24.224 Cannot find device "nvmf_tgt_br" 00:31:24.224 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:31:24.224 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:31:24.224 Cannot find device "nvmf_tgt_br2" 00:31:24.224 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:31:24.224 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:31:24.224 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:31:24.224 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:24.224 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:24.224 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:31:24.224 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:24.224 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:24.224 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:31:24.224 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:31:24.224 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:24.224 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:24.224 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:24.224 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:24.224 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:24.224 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:24.224 18:38:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:24.224 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:24.493 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:31:24.493 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:31:24.493 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:31:24.493 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:31:24.493 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:24.493 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:24.493 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:24.493 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:31:24.493 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:31:24.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:24.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:31:24.494 00:31:24.494 --- 10.0.0.2 ping statistics --- 00:31:24.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.494 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:31:24.494 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:24.494 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:31:24.494 00:31:24.494 --- 10.0.0.3 ping statistics --- 00:31:24.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.494 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:24.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:24.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:31:24.494 00:31:24.494 --- 10.0.0.1 ping statistics --- 00:31:24.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.494 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=102760 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 102760 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 102760 ']' 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:24.494 18:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.431 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:25.431 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:31:25.431 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:25.431 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:25.431 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4f5254c0cba0f259895de44c6fcf4e07 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.eY3 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4f5254c0cba0f259895de44c6fcf4e07 0 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4f5254c0cba0f259895de44c6fcf4e07 0 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4f5254c0cba0f259895de44c6fcf4e07 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.eY3 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.eY3 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.eY3 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:25.691 18:38:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6db6ce8f1a9cb2adf7a0de6cd855e2075fb932be04222e1235f9efd7bb91941a 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.kdv 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6db6ce8f1a9cb2adf7a0de6cd855e2075fb932be04222e1235f9efd7bb91941a 3 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6db6ce8f1a9cb2adf7a0de6cd855e2075fb932be04222e1235f9efd7bb91941a 3 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6db6ce8f1a9cb2adf7a0de6cd855e2075fb932be04222e1235f9efd7bb91941a 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.kdv 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.kdv 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.kdv 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:25.691 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5810d4443543ae9952db09d730bb7893ea339156fc66209b 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.sAP 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5810d4443543ae9952db09d730bb7893ea339156fc66209b 0 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5810d4443543ae9952db09d730bb7893ea339156fc66209b 0 
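The gen_dhchap_key entries above show the random hex secret being produced with xxd, but the encoding step itself is hidden behind a bare "python -". As a rough reconstruction (an assumption about that elided step, not the verbatim common.sh helper): the DHHC-1 string appears to be base64 of the secret with a little-endian CRC-32 appended, prefixed by the digest identifier from the map logged above (00=null, 01=sha256, 02=sha384, 03=sha512). A minimal sketch under that assumption:

# Hypothetical stand-in for gen_dhchap_key/format_dhchap_key; the CRC-32 suffix is an assumption.
gen_dhchap_key_sketch() {
    local digest_id=$1 len=$2 key
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # hex secret, same xxd invocation as in the log
    python3 - "$key" "$digest_id" <<'PYEOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")          # assumed: CRC-32 of the secret, little-endian
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PYEOF
}
# Example use, mirroring keys[1] above: a 48-character null-digest secret, written with mode 0600
gen_dhchap_key_sketch 0 48 > /tmp/spdk.key-null.example && chmod 0600 /tmp/spdk.key-null.example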
00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5810d4443543ae9952db09d730bb7893ea339156fc66209b 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.sAP 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.sAP 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.sAP 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2480c2b8ed59b9af2bbd7d1e9ee407d5b620de8202e0df2f 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.kgR 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2480c2b8ed59b9af2bbd7d1e9ee407d5b620de8202e0df2f 2 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2480c2b8ed59b9af2bbd7d1e9ee407d5b620de8202e0df2f 2 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2480c2b8ed59b9af2bbd7d1e9ee407d5b620de8202e0df2f 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:25.692 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:25.952 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.kgR 00:31:25.952 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.kgR 00:31:25.952 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.kgR 00:31:25.952 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:25.952 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:25.952 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:25.952 18:38:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:25.952 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:25.952 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:25.952 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:25.952 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=58ffb23a38304421393fabb6a6289c57 00:31:25.952 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:25.952 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.yxx 00:31:25.952 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 58ffb23a38304421393fabb6a6289c57 1 00:31:25.952 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 58ffb23a38304421393fabb6a6289c57 1 00:31:25.952 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:25.952 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:25.952 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=58ffb23a38304421393fabb6a6289c57 00:31:25.952 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:25.952 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:25.952 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.yxx 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.yxx 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.yxx 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=016d769be0c9db84816691e73a1d1115 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.kI7 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 016d769be0c9db84816691e73a1d1115 1 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 016d769be0c9db84816691e73a1d1115 1 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=016d769be0c9db84816691e73a1d1115 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.kI7 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.kI7 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.kI7 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1b396b7c82adeea94c5fb24343b8e4b65cae1d8e470c7380 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Zq9 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1b396b7c82adeea94c5fb24343b8e4b65cae1d8e470c7380 2 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1b396b7c82adeea94c5fb24343b8e4b65cae1d8e470c7380 2 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1b396b7c82adeea94c5fb24343b8e4b65cae1d8e470c7380 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Zq9 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Zq9 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Zq9 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:25.953 18:38:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c0d54fa1800edf12ab03572804172c92 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Xxt 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c0d54fa1800edf12ab03572804172c92 0 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c0d54fa1800edf12ab03572804172c92 0 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c0d54fa1800edf12ab03572804172c92 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:25.953 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:26.213 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Xxt 00:31:26.213 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Xxt 00:31:26.213 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Xxt 00:31:26.213 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:31:26.213 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:26.213 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:26.213 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:26.213 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:26.213 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:26.213 18:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:26.213 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eb11c6aaa6e52edbcb8f1b05048566b3296208ba04f21ae897a669f342c2a982 00:31:26.213 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:26.213 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Rnb 00:31:26.213 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eb11c6aaa6e52edbcb8f1b05048566b3296208ba04f21ae897a669f342c2a982 3 00:31:26.213 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eb11c6aaa6e52edbcb8f1b05048566b3296208ba04f21ae897a669f342c2a982 3 00:31:26.213 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:26.213 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:26.213 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eb11c6aaa6e52edbcb8f1b05048566b3296208ba04f21ae897a669f342c2a982 00:31:26.213 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:26.213 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:31:26.213 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Rnb 00:31:26.213 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Rnb 00:31:26.213 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Rnb 00:31:26.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:26.213 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:31:26.213 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 102760 00:31:26.213 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 102760 ']' 00:31:26.213 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:26.213 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:26.213 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:26.213 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:26.213 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.472 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:26.472 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:31:26.472 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:26.472 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.eY3 00:31:26.472 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.472 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.472 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.472 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.kdv ]] 00:31:26.472 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kdv 00:31:26.472 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.472 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.sAP 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.kgR ]] 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.kgR 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.yxx 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.kI7 ]] 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kI7 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Zq9 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Xxt ]] 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Xxt 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Rnb 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:26.473 18:38:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:26.473 18:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:27.041 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:27.041 Waiting for block devices as requested 00:31:27.041 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:27.041 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:27.609 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:27.609 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:27.609 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:27.609 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:27.609 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:27.609 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:27.609 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:27.609 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:27.609 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:31:27.609 No valid GPT data, bailing 00:31:27.609 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:27.609 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:31:27.609 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:31:27.609 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:27.609 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:27.609 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:31:27.609 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:31:27.868 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:31:27.868 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:31:27.868 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:27.868 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:31:27.868 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:31:27.868 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:31:27.868 No valid GPT data, bailing 00:31:27.868 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:31:27.868 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:31:27.868 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@392 -- # return 1 00:31:27.868 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:31:27.868 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:27.868 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:31:27.868 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:31:27.868 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:31:27.868 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:31:27.868 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:27.868 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:31:27.868 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:31:27.868 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:31:27.868 No valid GPT data, bailing 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:31:27.869 No valid GPT data, bailing 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:27.869 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -a 10.0.0.1 -t tcp -s 4420 00:31:28.128 00:31:28.128 Discovery Log Number of Records 2, Generation counter 2 00:31:28.128 =====Discovery Log Entry 0====== 00:31:28.128 trtype: tcp 00:31:28.128 adrfam: ipv4 00:31:28.128 subtype: current discovery subsystem 00:31:28.128 treq: not specified, sq flow control disable supported 00:31:28.128 portid: 1 00:31:28.128 trsvcid: 4420 00:31:28.128 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:28.128 traddr: 10.0.0.1 00:31:28.128 eflags: none 00:31:28.128 sectype: none 00:31:28.128 =====Discovery Log Entry 1====== 00:31:28.128 trtype: tcp 00:31:28.128 adrfam: ipv4 00:31:28.128 subtype: nvme subsystem 00:31:28.128 treq: not specified, sq flow control disable supported 00:31:28.128 portid: 1 00:31:28.128 trsvcid: 4420 00:31:28.128 subnqn: nqn.2024-02.io.spdk:cnode0 00:31:28.128 traddr: 10.0.0.1 00:31:28.128 eflags: none 00:31:28.128 sectype: none 00:31:28.128 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:28.128 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:31:28.128 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:28.128 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:28.128 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.128 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:28.128 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:28.128 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:28.128 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:28.128 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:28.128 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:28.128 18:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:28.128 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:28.128 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: ]] 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 
10.0.0.1 ]] 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.129 nvme0n1 00:31:28.129 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.388 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: ]] 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.389 nvme0n1 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.389 
18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: ]] 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.389 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.648 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.648 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.648 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:28.648 18:38:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:28.648 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:28.648 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.648 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.648 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:28.648 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:28.648 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:28.648 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.649 nvme0n1 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:28.649 18:38:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: ]] 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.649 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.908 nvme0n1 00:31:28.908 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.908 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.908 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:28.908 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.908 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.908 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.908 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.908 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:28.908 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.908 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.908 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.908 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:28.908 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:31:28.908 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.908 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:28.908 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:28.908 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:28.908 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:28.908 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:28.908 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:28.908 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:28.908 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:28.908 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: ]] 00:31:28.908 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:28.908 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:31:28.908 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.909 18:38:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.909 nvme0n1 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.909 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:29.168 
18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.168 18:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
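The sha256/ffdhe2048 sweep above runs the same connect_authenticate() cycle once per key slot (keyid 0 through 4): host/auth.sh loads the DHHC-1 secrets for that slot into the kernel nvmet host entry, pins the SPDK initiator to a single digest and DH group via bdev_nvme_set_options, attaches with bdev_nvme_attach_controller, checks that an authenticated nvme0 controller appears, and detaches before the next iteration; the log then repeats the whole sweep for ffdhe3072 and the larger groups below. A condensed, stand-alone sketch of one such pass follows. It is illustrative only: it assumes a kernel target built with nvmet in-band auth support (the dhchap_* attribute names under the configfs host directory come from that interface, not from this log), assumes rpc_cmd resolves to SPDK's scripts/rpc.py as in the autotest harness, and assumes the key1/ckey1 names were registered with SPDK's keyring earlier in the run; the long DHHC-1 strings are elided rather than repeated.

#!/usr/bin/env bash
# Illustrative replay of a single sha256/ffdhe2048 authentication pass
# (attribute names and rpc.py path are assumptions noted above).
HOSTNQN=nqn.2024-02.io.spdk:host0
SUBNQN=nqn.2024-02.io.spdk:cnode0
HOSTDIR=/sys/kernel/config/nvmet/hosts/$HOSTNQN

# Target side: what nvmet_auth_set_key() installs for this digest/dhgroup/keyid.
echo 'hmac(sha256)'   > "$HOSTDIR/dhchap_hash"       # assumed attribute name
echo 'ffdhe2048'      > "$HOSTDIR/dhchap_dhgroup"    # assumed attribute name
echo 'DHHC-1:00:...'  > "$HOSTDIR/dhchap_key"        # key1 as logged above, elided here
echo 'DHHC-1:02:...'  > "$HOSTDIR/dhchap_ctrl_key"   # ckey1 as logged above, elided here

# Initiator side: restrict negotiation, attach, verify, detach (connect_authenticate()).
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0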
00:31:29.168 nvme0n1 00:31:29.168 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.168 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.168 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.168 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.168 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.168 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.168 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.168 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:29.168 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.168 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.168 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.168 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:29.168 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.168 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:31:29.169 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.169 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:29.169 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:29.169 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:29.169 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:29.169 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:29.169 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:29.169 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:29.428 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:29.428 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: ]] 00:31:29.428 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:29.428 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:31:29.428 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.428 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:29.428 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:29.428 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:29.428 18:38:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:29.428 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:29.428 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.428 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.428 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.428 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.428 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:29.428 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:29.428 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:29.428 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.428 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.428 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:29.428 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:29.428 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:29.428 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:29.428 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:29.428 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:29.428 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.428 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.687 nvme0n1 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.687 18:38:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: ]] 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:29.687 18:38:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.687 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.946 nvme0n1 00:31:29.946 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: ]] 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.947 nvme0n1 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.947 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.207 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.207 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:30.207 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.207 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.207 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.207 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:30.207 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:31:30.207 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:30.207 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:30.207 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:30.207 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:30.207 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:30.207 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:30.207 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:30.207 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:30.207 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:30.207 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: ]] 00:31:30.207 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:30.207 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:31:30.207 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:30.207 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:30.207 18:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.207 nvme0n1 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.207 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.468 nvme0n1 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:30.468 18:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:31.045 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:31.045 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: ]] 00:31:31.045 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:31.045 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:31:31.045 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:31.045 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:31.045 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:31.045 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:31.045 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:31.045 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:31.045 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.045 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.045 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.045 18:38:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:31.045 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:31.045 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:31.045 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:31.045 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.045 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.045 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:31.045 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:31.045 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:31.045 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:31.045 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:31.045 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:31.045 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.045 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.305 nvme0n1 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: ]] 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:31.305 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:31.564 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:31.564 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:31.564 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.564 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.564 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.564 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:31.564 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:31.564 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:31.564 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:31.564 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.564 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.564 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:31.564 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:31.564 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:31.564 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:31.564 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:31.564 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:31.564 18:38:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.564 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.564 nvme0n1 00:31:31.564 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.564 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:31.564 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:31.564 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.564 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.564 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: ]] 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.824 nvme0n1 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:31.824 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.825 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.825 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:31.825 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: ]] 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.084 18:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.084 nvme0n1 00:31:32.084 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.084 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.084 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.084 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.084 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:32.343 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.343 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.343 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.343 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.343 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.343 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.343 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:32.343 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:31:32.343 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:32.343 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:32.343 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:32.343 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:32.343 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:32.343 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:32.343 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:32.343 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:32.343 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:32.343 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:32.343 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:31:32.343 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:32.343 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:32.343 18:38:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:32.343 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:32.344 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:32.344 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:32.344 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.344 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.344 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.344 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:32.344 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:32.344 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:32.344 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:32.344 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.344 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.344 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:32.344 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.344 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:32.344 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:32.344 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:32.344 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:32.344 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.344 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.603 nvme0n1 00:31:32.603 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.603 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.603 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.603 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:32.603 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.603 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.603 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.603 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.603 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.603 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.603 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
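The entries above repeat the same host-side sequence for every key index in the sha256/ffdhe4096 pass: restrict the initiator to the digest and DH group under test, attach a controller presenting the per-key DH-HMAC-CHAP secrets, confirm the controller appears, then detach before the next iteration. The following is a minimal sketch of one such iteration, assuming the autotest `rpc_cmd` helper (a wrapper around SPDK's `scripts/rpc.py`) is sourced and the `key1`/`ckey1` keyring names from the trace are already registered; it only illustrates the calls visible in the log and is not the authoritative host/auth.sh implementation.

```bash
#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration as traced above (assumptions:
# rpc_cmd is the autotest wrapper around scripts/rpc.py; key1/ckey1 are the
# DHHC-1 secrets already loaded by the surrounding script).
digest=sha256
dhgroup=ffdhe4096
keyid=1

# Limit the initiator to the digest/DH group being exercised.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach to the target at the initiator IP, presenting the host key and,
# when one exists, the controller key for bidirectional authentication.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Authentication succeeded if the controller shows up; then clean up.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0
```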
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.603 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:32.603 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:32.603 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:31:32.603 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:32.603 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:32.603 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:32.603 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:32.603 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:32.603 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:32.603 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:32.603 18:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: ]] 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.505 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.764 nvme0n1 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: ]] 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.764 18:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.330 nvme0n1 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.330 18:38:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: ]] 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.330 18:38:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.330 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.588 nvme0n1 00:31:35.588 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.588 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.588 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.588 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.588 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:35.588 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
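Before every attach, the trace shows `get_main_ns_ip` from nvmf/common.sh deciding which address to dial: it keeps a small map from transport to environment-variable name (rdma to NVMF_FIRST_TARGET_IP, tcp to NVMF_INITIATOR_IP) and ends up echoing 10.0.0.1 here. The reconstruction below is inferred from the traced lines only and may differ in detail from the real helper; TEST_TRANSPORT and the candidate variables are assumed to be exported elsewhere.

```bash
# Sketch of the address-selection logic traced from nvmf/common.sh.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # Pick the variable name registered for the active transport, then
    # dereference it (for tcp this resolves to NVMF_INITIATOR_IP -> 10.0.0.1).
    if [[ -n ${TEST_TRANSPORT:-} && -n ${ip_candidates[$TEST_TRANSPORT]:-} ]]; then
        ip=${ip_candidates[$TEST_TRANSPORT]}
        ip=${!ip:-}
    fi
    # Fall back to the default initiator address when nothing was resolved.
    echo "${ip:-10.0.0.1}"
}
```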
key=DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: ]] 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:35.847 18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.847 
18:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.106 nvme0n1 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.106 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.674 nvme0n1 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.674 18:38:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: ]] 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
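On the target side, each iteration is preceded by `nvmet_auth_set_key`, which the trace shows echoing the hash name ('hmac(sha256)'), the DH group, and the DHHC-1 host/controller secrets. The destinations of those echoes are not visible in this part of the log; the sketch below assumes they land in the kernel nvmet host entry's DH-CHAP configfs attributes, with the `keys`/`ckeys` arrays and the host NQN taken from the surrounding script, and should be read as an illustration rather than the actual helper.

```bash
# Hypothetical expansion of the nvmet_auth_set_key calls traced above.
# Assumption: values are written to the nvmet configfs attributes of the
# allowed host entry; hostnqn and the keys/ckeys arrays come from the test.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local hostnqn=nqn.2024-02.io.spdk:host0
    local cfs=/sys/kernel/config/nvmet/hosts/$hostnqn

    echo "hmac(${digest})" > "$cfs/dhchap_hash"     # e.g. hmac(sha256)
    echo "$dhgroup"        > "$cfs/dhchap_dhgroup"  # e.g. ffdhe8192
    echo "${keys[keyid]}"  > "$cfs/dhchap_key"      # host secret (DHHC-1:..)
    # The controller key is optional; keyid 4 has an empty ckey in this run.
    [[ -n ${ckeys[keyid]:-} ]] && echo "${ckeys[keyid]}" > "$cfs/dhchap_ctrl_key"
}
```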
nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.674 18:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.241 nvme0n1 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: ]] 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.241 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.177 nvme0n1 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: ]] 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.177 
18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.177 18:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.758 nvme0n1 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: ]] 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.758 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.759 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:38.759 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:38.759 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:38.759 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:38.759 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:38.759 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:38.759 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.759 18:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.338 nvme0n1 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.338 18:38:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:39.338 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:39.338 18:38:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.339 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.339 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:39.339 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.339 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:39.339 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:39.339 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:39.339 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:39.339 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.339 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.277 nvme0n1 00:31:40.277 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.277 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.277 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.277 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.277 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.277 18:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: ]] 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:40.277 nvme0n1 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.277 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: ]] 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.278 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.541 nvme0n1 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:31:40.541 
18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: ]] 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.541 nvme0n1 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.541 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.802 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.802 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.802 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.802 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.802 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.802 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.802 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:31:40.802 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.802 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:40.802 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:40.802 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:40.802 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:40.802 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:40.802 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: ]] 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.803 
18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.803 nvme0n1 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:40.803 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.804 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.804 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:40.804 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.804 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:40.804 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:40.804 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:40.804 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:40.804 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.804 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.062 nvme0n1 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: ]] 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:41.062 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.063 18:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.319 nvme0n1 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.320 
18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: ]] 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:41.320 18:38:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.320 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.577 nvme0n1 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:41.577 18:38:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: ]] 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.577 nvme0n1 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.577 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: ]] 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.836 18:38:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.836 nvme0n1 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:41.836 
18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.836 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.095 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.095 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.095 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:42.095 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:42.095 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:42.095 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.095 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.095 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:42.095 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.095 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:42.095 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:42.095 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:42.095 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:42.095 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.095 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
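The trace up to this point is the sha384 + ffdhe3072 pass of the DH-HMAC-CHAP matrix: for each key index the test seeds the target with the host key (and the controller key when one is defined; the nvmet_auth_set_key helper suggests a kernel nvmet target), restricts the SPDK initiator to the digest and DH group under test via bdev_nvme_set_options, attaches, confirms the controller shows up as nvme0, and detaches before moving to the next index. A minimal per-iteration sketch follows, reusing the same RPCs that appear in the trace; the scripts/rpc.py invocation, the configfs attribute names, and the pre-registered keyring names key<N>/ckey<N> are assumptions, since the trace does not show the rpc_cmd wrapper, the echo redirection targets, or the key registration step.

    # Sketch of one auth-matrix iteration (assumptions noted above).
    DIGEST=sha384 DHGROUP=ffdhe3072 KEYID=1
    KEY='DHHC-1:00:...'                     # host secret for this keyid (elided here)
    CKEY='DHHC-1:02:...'                    # optional controller (bidirectional) secret
    HOSTNQN=nqn.2024-02.io.spdk:host0
    SUBNQN=nqn.2024-02.io.spdk:cnode0

    # Target side: kernel nvmet configfs host entry (assumed to already exist and
    # to be allowed by the subsystem); attribute names are assumed, not traced.
    HOSTDIR=/sys/kernel/config/nvmet/hosts/$HOSTNQN
    echo "hmac($DIGEST)" > "$HOSTDIR/dhchap_hash"
    echo "$DHGROUP"      > "$HOSTDIR/dhchap_dhgroup"
    echo "$KEY"          > "$HOSTDIR/dhchap_key"
    [ -n "$CKEY" ] && echo "$CKEY" > "$HOSTDIR/dhchap_ctrl_key"

    # Initiator side: same bdev_nvme RPCs as in the trace. key$KEYID / ckey$KEYID
    # are keyring names assumed to have been registered earlier in the test run.
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key "key$KEYID" --dhchap-ctrlr-key "ckey$KEYID"
    scripts/rpc.py bdev_nvme_get_controllers    # expect one controller named nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0

The remainder of the trace below repeats the same cycle for the ffdhe4096 and ffdhe6144 DH groups.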
00:31:42.095 nvme0n1 00:31:42.095 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.095 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.095 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.095 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.095 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.095 18:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: ]] 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:42.095 18:38:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.095 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.362 nvme0n1 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.362 18:38:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: ]] 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.362 18:38:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.362 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.640 nvme0n1 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: ]] 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.640 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.898 nvme0n1 00:31:42.898 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.898 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.898 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.898 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.898 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.898 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.898 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.898 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:42.898 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.898 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.898 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.898 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.898 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:31:42.898 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.898 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:42.898 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:42.898 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:42.898 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:42.898 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:42.898 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:42.898 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:42.898 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:42.898 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: ]] 00:31:42.898 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:42.898 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:31:42.898 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.899 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:42.899 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:42.899 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:42.899 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.899 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:42.899 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.899 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.899 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.899 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.899 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:42.899 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:42.899 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:42.899 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.899 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.899 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:42.899 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.899 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:42.899 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:42.899 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:42.899 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:42.899 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.899 18:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.156 nvme0n1 00:31:43.156 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.156 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.156 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.156 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.156 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.156 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.156 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.156 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.156 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.156 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.156 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.156 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.156 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:31:43.156 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.156 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:43.156 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:43.156 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:43.156 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:43.156 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:43.156 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:43.156 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:43.156 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:43.156 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:43.156 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:31:43.157 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.157 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:43.157 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:43.157 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:43.157 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.157 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:43.157 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.157 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.157 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.157 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.157 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:43.157 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:43.157 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:43.157 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.157 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.157 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:43.157 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.157 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:43.157 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:43.157 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:43.157 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:43.157 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.157 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.415 nvme0n1 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: ]] 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.415 18:38:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.415 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:43.416 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:43.416 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:43.416 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:43.416 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.416 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.982 nvme0n1 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: ]] 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:43.982 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:43.983 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:43.983 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.983 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.983 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:43.983 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.983 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:43.983 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:43.983 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:43.983 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:43.983 18:38:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.983 18:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.241 nvme0n1 00:31:44.241 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.241 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.241 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.241 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.241 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.241 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: ]] 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.499 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.758 nvme0n1 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: ]] 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.758 18:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.324 nvme0n1 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:45.324 18:38:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.324 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.582 nvme0n1 00:31:45.582 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
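Note: every key index in the trace above exercises the same host-side sequence; the sketch below condenses what connect_authenticate visibly does in this run. rpc_cmd is the test harness wrapper around SPDK's scripts/rpc.py; the NQNs, address, port and key names are the ones shown in the trace, and the loop variables are only illustrative.

    # Illustrative sketch of one authentication pass as traced above (sha384 + ffdhe6144 shown).
    digest=sha384 dhgroup=ffdhe6144 keyid=2

    # Restrict the host to a single digest/DH-group pair for this pass.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach over TCP with the host secret for this slot; the controller secret is
    # passed only when a ckey exists for the slot (bidirectional authentication).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Confirm the controller authenticated and came up, then detach for the next pass.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0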
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: ]] 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.583 18:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.150 nvme0n1 00:31:46.150 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.150 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.150 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.150 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.150 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.150 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: ]] 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.409 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.976 nvme0n1 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.976 18:38:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: ]] 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.976 18:38:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.976 18:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.543 nvme0n1 00:31:47.543 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.543 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.543 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.543 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.543 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.543 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
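The nvmf/common.sh@741-@755 block that repeats before every attach above is just the IP lookup. A reconstruction consistent with the expanded values in the trace follows; the transport/environment variable names are assumptions, since xtrace only shows them already expanded to tcp and 10.0.0.1.

    # Reconstruction of the get_main_ns_ip steps traced at nvmf/common.sh@741-@755.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # target-side IP for RDMA runs
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # initiator IP for TCP runs (this job)

        # $TEST_TRANSPORT is an assumed name; the trace shows it already expanded to "tcp".
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        ip=${!ip}                                    # indirect expansion, 10.0.0.1 in this run
        [[ -z $ip ]] && return 1
        echo "$ip"
    }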
key=DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: ]] 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:47.802 18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.802 
18:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.369 nvme0n1 00:31:48.369 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.369 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.369 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.369 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.369 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.370 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.938 nvme0n1 00:31:48.938 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.938 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.938 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.938 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.938 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.938 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.938 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.938 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.938 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.938 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:31:49.197 18:39:00 
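At this point the trace switches from sha384 to sha512 (host/auth.sh@100) and restarts the DH-group and key sweeps (@101, @102). The overall shape of the loop, with only the values observed in this excerpt filled in, is roughly the following; the keys/ckeys arrays and the two helpers come from host/auth.sh itself.

    # Outer sweep implied by host/auth.sh@100-@104 in the trace; array contents list
    # only what this excerpt shows, the full script iterates over more combinations.
    digests=(sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe6144 ffdhe8192)

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do                         # key slots 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # program the target side
                connect_authenticate "$digest" "$dhgroup" "$keyid" # attach, verify, detach
            done
        done
    done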
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: ]] 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.197 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.198 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.198 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.198 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.198 18:39:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.198 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.198 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:49.198 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.198 18:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.198 nvme0n1 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: ]] 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:49.198 18:39:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.198 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.457 nvme0n1 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: ]] 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.457 nvme0n1 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.457 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: ]] 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.716 nvme0n1 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
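On the target side, the echo lines at host/auth.sh@48-@51 above (the hmac(...) digest, the DH group, the key and the optional controller key) are what arm the kernel nvmet host entry before each connect. The trace elides the redirection targets, so the configfs paths below are an assumption, kept only to show the shape of the writes; key and ckey stand for the DHHC-1 secrets from the script's key table.

    # Assumed destinations for the values echoed at host/auth.sh@48-@51; the trace
    # shows only the values, not where they are written.
    host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo 'hmac(sha512)' > "$host_cfg/dhchap_hash"       # digest for this pass
    echo ffdhe2048      > "$host_cfg/dhchap_dhgroup"    # DH group for this pass
    echo "$key"         > "$host_cfg/dhchap_key"        # host secret for this key slot
    [[ -n $ckey ]] && echo "$ckey" > "$host_cfg/dhchap_ctrl_key"   # controller secret, if any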
common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.716 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.717 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.717 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.717 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.717 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.717 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:31:49.717 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.717 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:49.717 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:49.717 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:49.717 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:49.717 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:49.717 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:49.717 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:49.717 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:49.717 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:49.717 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:31:49.717 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.717 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:49.717 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:49.717 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:49.717 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.717 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:49.717 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.717 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.976 nvme0n1 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: ]] 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.976 18:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:50.235 nvme0n1 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: ]] 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.235 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.494 nvme0n1 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:31:50.494 
18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: ]] 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.494 nvme0n1 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.494 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: ]] 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.801 
18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.801 nvme0n1 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.801 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.802 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:50.802 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:50.802 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:50.802 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.802 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.802 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:50.802 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.802 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:50.802 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:50.802 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:50.802 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:50.802 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.802 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.060 nvme0n1 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: ]] 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:31:51.060 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.061 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:51.061 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:51.061 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:51.061 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.061 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:51.061 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.061 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.061 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.061 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.061 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:51.061 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:51.061 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:51.061 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.061 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.061 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:51.061 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.061 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:51.061 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:51.061 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:51.061 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:51.061 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.061 18:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.320 nvme0n1 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.320 
18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: ]] 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:51.320 18:39:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.320 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.578 nvme0n1 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:51.578 18:39:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: ]] 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.578 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.837 nvme0n1 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: ]] 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.837 18:39:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.837 18:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.096 nvme0n1 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:52.096 
18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:52.096 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:52.355 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:52.355 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.355 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
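For reference, the trace above repeats one round per digest/DH-group/key-ID combination (sha512 with ffdhe2048, ffdhe3072 and ffdhe4096, key IDs 0 through 4, in this stretch of the log). Below is a minimal sketch of that per-key round trip, condensed from the commands visible in the trace rather than a verbatim excerpt of host/auth.sh; it assumes the SPDK autotest environment seen here, where rpc_cmd wraps the SPDK JSON-RPC client, nvmet_auth_set_key is the target-side helper, and the keys/ckeys arrays hold the DHHC-1 secrets printed above. Only RPC methods and flags that actually appear in the trace are used.

# One DH-HMAC-CHAP authentication round for a given digest, DH group and key ID
# (condensed sketch based on the host/auth.sh xtrace above; helper names and the
# keys/ckeys arrays are assumed to be set up earlier in the test).
auth_round() {
  local digest=$1 dhgroup=$2 keyid=$3
  # Target side: install the secret for hmac(<digest>)/<dhgroup> under this key ID.
  nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
  # Host side: restrict the initiator to the digest and DH group under test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # Connect with DH-HMAC-CHAP; the controller key is passed only when a ckey exists.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
  # Verify the authenticated controller came up, then tear it down for the next combination.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
}
# Example: the ffdhe4096 / key4 attempt logged just above.
auth_round sha512 ffdhe4096 4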
00:31:52.355 nvme0n1 00:31:52.355 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.355 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.355 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.355 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.355 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.355 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.355 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.355 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.355 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.355 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.355 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.355 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:52.355 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.355 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:31:52.355 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.355 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:52.355 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:52.355 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:52.355 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:52.355 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:52.355 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:52.355 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:52.356 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:52.356 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: ]] 00:31:52.356 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:52.356 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:31:52.356 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.356 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:52.356 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:52.356 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:52.356 18:39:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.356 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:52.356 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.356 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.356 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.615 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.615 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:52.615 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:52.615 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:52.615 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.615 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.615 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:52.615 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.615 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:52.615 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:52.615 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:52.615 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:52.615 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.615 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.874 nvme0n1 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.874 18:39:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: ]] 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.874 18:39:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.874 18:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.441 nvme0n1 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: ]] 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.441 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.700 nvme0n1 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: ]] 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.700 18:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.268 nvme0n1 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:54.268 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:54.269 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.269 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.527 nvme0n1 00:31:54.527 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.527 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.527 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.527 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.527 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.527 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.527 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.527 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.527 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.527 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.527 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.527 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:54.527 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.527 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:31:54.527 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MjU0YzBjYmEwZjI1OTg5NWRlNDRjNmZjZjRlMDcVejsF: 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: ]] 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmRiNmNlOGYxYTljYjJhZGY3YTBkZTZjZDg1NWUyMDc1ZmI5MzJiZTA0MjIyZTEyMzVmOWVmZDdiYjkxOTQxYRdZHQ4=: 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.528 18:39:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.528 18:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.517 nvme0n1 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: ]] 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:55.517 18:39:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.517 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.084 nvme0n1 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThmZmIyM2EzODMwNDQyMTM5M2ZhYmI2YTYyODljNTcqEzE7: 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: ]] 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDE2ZDc2OWJlMGM5ZGI4NDgxNjY5MWU3M2ExZDExMTVpVChV: 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.084 18:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.652 nvme0n1 00:31:56.652 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.652 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.652 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.652 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.652 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.652 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.652 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.652 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.652 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.652 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.652 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.652 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.652 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:31:56.652 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.652 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:56.652 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:56.652 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:56.652 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:56.652 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:56.652 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzOTZiN2M4MmFkZWVhOTRjNWZiMjQzNDNiOGU0YjY1Y2FlMWQ4ZTQ3MGM3MzgwcCPSTg==: 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: ]] 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzBkNTRmYTE4MDBlZGYxMmFiMDM1NzI4MDQxNzJjOTL5NgzA: 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.653 18:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.592 nvme0n1 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWIxMWM2YWFhNmU1MmVkYmNiOGYxYjA1MDQ4NTY2YjMyOTYyMDhiYTA0ZjIxYWU4OTdhNjY5ZjM0MmMyYTk4MkOR7WI=: 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:57.593 18:39:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.593 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.161 nvme0n1 00:31:58.161 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.161 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.161 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.161 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.161 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.161 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.161 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.161 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.161 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.161 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.161 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.161 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:58.161 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.161 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:58.161 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:58.161 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:58.161 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:58.161 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:58.161 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:58.161 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:58.161 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTgxMGQ0NDQzNTQzYWU5OTUyZGIwOWQ3MzBiYjc4OTNlYTMzOTE1NmZjNjYyMDlixkJ1jw==: 00:31:58.161 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: ]] 00:31:58.162 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ4MGMyYjhlZDU5YjlhZjJiYmQ3ZDFlOWVlNDA3ZDViNjIwZGU4MjAyZTBkZjJm1ti+3A==: 00:31:58.162 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:58.162 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.162 18:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # 
local es=0 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.162 2024/07/22 18:39:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:58.162 request: 00:31:58.162 { 00:31:58.162 "method": "bdev_nvme_attach_controller", 00:31:58.162 "params": { 00:31:58.162 "name": "nvme0", 00:31:58.162 "trtype": "tcp", 00:31:58.162 "traddr": "10.0.0.1", 00:31:58.162 "adrfam": "ipv4", 00:31:58.162 "trsvcid": "4420", 00:31:58.162 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:58.162 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:58.162 "prchk_reftag": false, 00:31:58.162 "prchk_guard": false, 00:31:58.162 "hdgst": false, 00:31:58.162 "ddgst": false 00:31:58.162 } 00:31:58.162 } 00:31:58.162 Got JSON-RPC error response 00:31:58.162 GoRPCClient: error on JSON-RPC call 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- 
# local ip 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.162 2024/07/22 18:39:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:58.162 request: 00:31:58.162 { 00:31:58.162 "method": "bdev_nvme_attach_controller", 00:31:58.162 "params": { 00:31:58.162 "name": "nvme0", 00:31:58.162 "trtype": "tcp", 00:31:58.162 "traddr": "10.0.0.1", 00:31:58.162 "adrfam": "ipv4", 00:31:58.162 "trsvcid": "4420", 00:31:58.162 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:58.162 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:58.162 "prchk_reftag": false, 00:31:58.162 "prchk_guard": false, 00:31:58.162 "hdgst": false, 00:31:58.162 "ddgst": false, 00:31:58.162 "dhchap_key": "key2" 00:31:58.162 } 00:31:58.162 } 00:31:58.162 Got 
JSON-RPC error response 00:31:58.162 GoRPCClient: error on JSON-RPC call 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.162 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.421 2024/07/22 18:39:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:58.421 request: 00:31:58.421 { 00:31:58.421 "method": "bdev_nvme_attach_controller", 00:31:58.421 "params": { 00:31:58.421 "name": "nvme0", 00:31:58.421 "trtype": "tcp", 00:31:58.421 "traddr": "10.0.0.1", 00:31:58.421 "adrfam": "ipv4", 00:31:58.421 "trsvcid": "4420", 00:31:58.421 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:58.421 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:58.421 "prchk_reftag": false, 00:31:58.421 "prchk_guard": false, 00:31:58.421 "hdgst": false, 00:31:58.421 "ddgst": false, 00:31:58.421 "dhchap_key": "key1", 00:31:58.421 "dhchap_ctrlr_key": "ckey2" 00:31:58.421 } 00:31:58.421 } 00:31:58.421 Got JSON-RPC error response 00:31:58.421 GoRPCClient: error on JSON-RPC call 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:31:58.421 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:31:58.422 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:31:58.422 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:58.422 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:31:58.422 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:58.422 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:31:58.422 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:58.422 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:58.422 rmmod nvme_tcp 00:31:58.422 rmmod nvme_fabrics 00:31:58.422 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:58.422 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:31:58.422 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:31:58.422 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 102760 ']' 00:31:58.422 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # 
killprocess 102760 00:31:58.422 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 102760 ']' 00:31:58.422 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 102760 00:31:58.422 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:31:58.422 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:58.422 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 102760 00:31:58.422 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:58.422 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:58.422 killing process with pid 102760 00:31:58.422 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 102760' 00:31:58.422 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 102760 00:31:58.422 18:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 102760 00:31:59.799 18:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:59.799 18:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:59.799 18:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:59.799 18:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:59.799 18:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:59.799 18:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.799 18:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:59.799 18:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.799 18:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:59.799 18:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:59.799 18:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:59.799 18:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:31:59.799 18:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:31:59.799 18:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:31:59.799 18:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:59.799 18:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:59.799 18:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:59.799 18:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:59.799 18:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:59.799 18:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:59.799 18:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:00.408 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:00.408 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:32:00.408 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:32:00.408 18:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.eY3 /tmp/spdk.key-null.sAP /tmp/spdk.key-sha256.yxx /tmp/spdk.key-sha384.Zq9 /tmp/spdk.key-sha512.Rnb /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:32:00.408 18:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:00.975 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:00.975 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:00.975 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:00.975 00:32:00.975 real 0m36.927s 00:32:00.975 user 0m32.842s 00:32:00.975 sys 0m4.170s 00:32:00.975 18:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:00.975 ************************************ 00:32:00.975 END TEST nvmf_auth_host 00:32:00.975 ************************************ 00:32:00.975 18:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.975 18:39:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:32:00.975 18:39:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:32:00.975 18:39:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:00.975 18:39:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:00.975 18:39:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:00.975 18:39:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.975 ************************************ 00:32:00.975 START TEST nvmf_digest 00:32:00.975 ************************************ 00:32:00.975 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:00.975 * Looking for test storage... 
00:32:00.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:32:00.975 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:00.975 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:32:00.975 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:00.975 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:00.975 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:00.975 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:00.975 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:00.975 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:00.975 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:00.975 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:00.975 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:00.975 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:00.975 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:32:00.976 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:32:00.976 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:00.976 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:00.976 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:00.976 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:00.976 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:00.976 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:00.976 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:00.976 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:00.976 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.976 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.976 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.976 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:32:00.976 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.976 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:32:00.976 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:00.976 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:01.235 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:01.235 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:01.235 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:01.235 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:01.235 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:01.235 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:01.235 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:01.235 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:01.235 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:32:01.235 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:01.235 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:32:01.235 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:01.235 
18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:01.235 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:01.235 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:01.235 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:01.235 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.235 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:01.235 18:39:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:32:01.235 Cannot find device "nvmf_tgt_br" 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # true 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:32:01.235 Cannot find device "nvmf_tgt_br2" 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # true 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:32:01.235 
Cannot find device "nvmf_tgt_br" 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # true 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:32:01.235 Cannot find device "nvmf_tgt_br2" 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # true 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:01.235 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:01.235 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:01.235 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set 
nvmf_init_br master nvmf_br 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:32:01.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:01.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:32:01.494 00:32:01.494 --- 10.0.0.2 ping statistics --- 00:32:01.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:01.494 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:32:01.494 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:01.494 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:32:01.494 00:32:01.494 --- 10.0.0.3 ping statistics --- 00:32:01.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:01.494 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:01.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:01.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:32:01.494 00:32:01.494 --- 10.0.0.1 ping statistics --- 00:32:01.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:01.494 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:01.494 ************************************ 00:32:01.494 START TEST nvmf_digest_clean 00:32:01.494 
************************************ 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=104356 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 104356 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 104356 ']' 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:01.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:01.494 18:39:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:01.494 [2024-07-22 18:39:13.496017] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:01.494 [2024-07-22 18:39:13.496193] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:01.752 [2024-07-22 18:39:13.664833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.011 [2024-07-22 18:39:13.925641] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:02.011 [2024-07-22 18:39:13.925732] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:02.011 [2024-07-22 18:39:13.925749] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:02.011 [2024-07-22 18:39:13.925765] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:02.011 [2024-07-22 18:39:13.925776] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:02.011 [2024-07-22 18:39:13.925834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:02.578 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:02.578 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:02.578 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:02.578 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:02.578 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:02.578 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:02.578 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:32:02.578 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:32:02.578 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:32:02.578 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.578 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:02.836 null0 00:32:02.836 [2024-07-22 18:39:14.839922] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:03.095 [2024-07-22 18:39:14.864224] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:03.095 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.095 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:32:03.095 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:03.095 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:03.095 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:03.095 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:03.095 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:03.095 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:03.095 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=104406 00:32:03.095 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 104406 /var/tmp/bperf.sock 00:32:03.095 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
--wait-for-rpc 00:32:03.095 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 104406 ']' 00:32:03.095 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:03.095 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:03.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:03.095 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:03.095 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:03.095 18:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:03.095 [2024-07-22 18:39:14.989285] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:03.095 [2024-07-22 18:39:14.989472] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104406 ] 00:32:03.354 [2024-07-22 18:39:15.168894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:03.613 [2024-07-22 18:39:15.480946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.181 18:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:04.181 18:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:04.181 18:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:04.181 18:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:04.181 18:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:04.805 18:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:04.805 18:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:05.063 nvme0n1 00:32:05.063 18:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:05.063 18:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:05.063 Running I/O for 2 seconds... 
00:32:06.965 00:32:06.965 Latency(us) 00:32:06.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:06.965 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:06.965 nvme0n1 : 2.01 14940.67 58.36 0.00 0.00 8555.36 4766.25 16562.73 00:32:06.965 =================================================================================================================== 00:32:06.965 Total : 14940.67 58.36 0.00 0.00 8555.36 4766.25 16562.73 00:32:06.965 0 00:32:06.965 18:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:06.965 18:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:06.965 18:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:06.965 18:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:06.965 18:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:06.965 | select(.opcode=="crc32c") 00:32:06.965 | "\(.module_name) \(.executed)"' 00:32:07.530 18:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:07.530 18:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:07.530 18:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:07.530 18:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:07.530 18:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 104406 00:32:07.530 18:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 104406 ']' 00:32:07.530 18:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 104406 00:32:07.530 18:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:07.530 18:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:07.530 18:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104406 00:32:07.530 18:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:07.530 18:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:07.531 killing process with pid 104406 00:32:07.531 18:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104406' 00:32:07.531 18:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 104406 00:32:07.531 Received shutdown signal, test time was about 2.000000 seconds 00:32:07.531 00:32:07.531 Latency(us) 00:32:07.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:07.531 =================================================================================================================== 00:32:07.531 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:07.531 18:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # 
wait 104406 00:32:08.465 18:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:32:08.465 18:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:08.465 18:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:08.465 18:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:08.465 18:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:08.465 18:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:08.465 18:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:08.465 18:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=104519 00:32:08.465 18:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 104519 /var/tmp/bperf.sock 00:32:08.465 18:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:08.465 18:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 104519 ']' 00:32:08.465 18:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:08.465 18:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:08.465 18:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:08.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:08.465 18:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:08.465 18:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:08.465 [2024-07-22 18:39:20.472366] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:08.465 [2024-07-22 18:39:20.472883] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104519 ] 00:32:08.465 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:08.465 Zero copy mechanism will not be used. 
00:32:08.724 [2024-07-22 18:39:20.656438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:08.981 [2024-07-22 18:39:20.966128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.546 18:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:09.546 18:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:09.546 18:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:09.546 18:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:09.546 18:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:10.111 18:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:10.111 18:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:10.369 nvme0n1 00:32:10.369 18:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:10.369 18:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:10.626 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:10.626 Zero copy mechanism will not be used. 00:32:10.626 Running I/O for 2 seconds... 
00:32:12.520 00:32:12.520 Latency(us) 00:32:12.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:12.520 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:12.520 nvme0n1 : 2.00 5952.95 744.12 0.00 0.00 2683.07 726.11 7536.64 00:32:12.520 =================================================================================================================== 00:32:12.520 Total : 5952.95 744.12 0.00 0.00 2683.07 726.11 7536.64 00:32:12.520 0 00:32:12.520 18:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:12.520 18:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:12.520 18:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:12.520 18:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:12.520 18:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:12.520 | select(.opcode=="crc32c") 00:32:12.520 | "\(.module_name) \(.executed)"' 00:32:12.780 18:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:12.780 18:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:12.780 18:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:12.780 18:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:12.780 18:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 104519 00:32:12.780 18:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 104519 ']' 00:32:12.780 18:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 104519 00:32:12.780 18:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:12.780 18:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:12.780 18:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104519 00:32:13.038 18:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:13.038 18:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:13.038 killing process with pid 104519 00:32:13.038 18:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104519' 00:32:13.038 Received shutdown signal, test time was about 2.000000 seconds 00:32:13.038 00:32:13.038 Latency(us) 00:32:13.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:13.038 =================================================================================================================== 00:32:13.038 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:13.038 18:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 104519 00:32:13.038 18:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # 
wait 104519 00:32:14.414 18:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:32:14.414 18:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:14.414 18:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:14.414 18:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:14.414 18:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:14.414 18:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:14.414 18:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:14.414 18:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:14.414 18:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=104618 00:32:14.414 18:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 104618 /var/tmp/bperf.sock 00:32:14.414 18:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 104618 ']' 00:32:14.414 18:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:14.414 18:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:14.414 18:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:14.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:14.414 18:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:14.414 18:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:14.414 [2024-07-22 18:39:26.210874] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:32:14.414 [2024-07-22 18:39:26.211092] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104618 ] 00:32:14.414 [2024-07-22 18:39:26.390102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.672 [2024-07-22 18:39:26.657157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:15.238 18:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:15.238 18:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:15.238 18:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:15.238 18:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:15.238 18:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:15.805 18:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:15.805 18:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:16.063 nvme0n1 00:32:16.335 18:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:16.335 18:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:16.335 Running I/O for 2 seconds... 
00:32:18.260 00:32:18.260 Latency(us) 00:32:18.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:18.260 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:18.260 nvme0n1 : 2.00 17018.54 66.48 0.00 0.00 7512.83 3664.06 17277.67 00:32:18.260 =================================================================================================================== 00:32:18.260 Total : 17018.54 66.48 0.00 0.00 7512.83 3664.06 17277.67 00:32:18.260 0 00:32:18.260 18:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:18.260 18:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:18.260 18:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:18.260 18:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:18.260 18:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:18.260 | select(.opcode=="crc32c") 00:32:18.260 | "\(.module_name) \(.executed)"' 00:32:18.825 18:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:18.825 18:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:18.825 18:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:18.825 18:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:18.825 18:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 104618 00:32:18.825 18:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 104618 ']' 00:32:18.825 18:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 104618 00:32:18.825 18:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:18.825 18:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:18.825 18:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104618 00:32:18.825 killing process with pid 104618 00:32:18.825 Received shutdown signal, test time was about 2.000000 seconds 00:32:18.825 00:32:18.825 Latency(us) 00:32:18.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:18.825 =================================================================================================================== 00:32:18.825 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:18.825 18:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:18.825 18:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:18.825 18:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104618' 00:32:18.825 18:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 104618 00:32:18.825 18:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # 
wait 104618 00:32:19.758 18:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:32:19.759 18:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:19.759 18:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:19.759 18:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:19.759 18:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:19.759 18:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:19.759 18:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:19.759 18:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=104728 00:32:19.759 18:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:19.759 18:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 104728 /var/tmp/bperf.sock 00:32:19.759 18:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 104728 ']' 00:32:19.759 18:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:19.759 18:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:19.759 18:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:19.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:19.759 18:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:19.759 18:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:19.759 [2024-07-22 18:39:31.729229] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:19.759 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:19.759 Zero copy mechanism will not be used. 
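Before each run_bperf invocation reaches that RPC phase, a fresh bdevperf is launched with the workload parameters on its command line (randwrite, 128 KiB I/O, queue depth 16 for this run) and the script waits for its RPC socket. A rough equivalent of that launch-and-wait step, where the polling loop is an illustrative stand-in for the suite's waitforlisten helper rather than a copy of it:

  "$SPDK_DIR"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &          # same arguments as host/digest.sh@82 above
  bperfpid=$!
  until [ -S /var/tmp/bperf.sock ]; do sleep 0.1; done               # crude wait for the UNIX-domain RPC socket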
00:32:19.759 [2024-07-22 18:39:31.729407] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104728 ] 00:32:20.016 [2024-07-22 18:39:31.900472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.274 [2024-07-22 18:39:32.175898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:20.844 18:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:20.844 18:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:20.844 18:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:20.844 18:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:20.844 18:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:21.409 18:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:21.409 18:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:21.666 nvme0n1 00:32:21.666 18:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:21.666 18:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:21.924 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:21.924 Zero copy mechanism will not be used. 00:32:21.924 Running I/O for 2 seconds... 
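Each run ends with the same crc32c accounting check (it appears again right after the next set of results): accel_get_stats is read from the bperf socket, filtered with jq, and the run only passes if the crc32c operations were executed by the expected module, which is software whenever scan_dsa=false. A condensed sketch of that check as the trace shows it:

  BPERF_RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"
  $BPERF_RPC accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
    | { read -r acc_module acc_executed
        (( acc_executed > 0 )) && [[ $acc_module == software ]]; }   # mirrors host/digest.sh@93-96 above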
00:32:23.822 00:32:23.822 Latency(us) 00:32:23.822 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:23.822 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:23.822 nvme0n1 : 2.00 5363.49 670.44 0.00 0.00 2974.17 2100.13 5451.40 00:32:23.822 =================================================================================================================== 00:32:23.822 Total : 5363.49 670.44 0.00 0.00 2974.17 2100.13 5451.40 00:32:23.822 0 00:32:23.822 18:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:23.822 18:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:23.822 18:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:23.822 18:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:23.822 | select(.opcode=="crc32c") 00:32:23.822 | "\(.module_name) \(.executed)"' 00:32:23.822 18:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:24.080 18:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:24.080 18:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:24.080 18:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:24.080 18:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:24.080 18:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 104728 00:32:24.080 18:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 104728 ']' 00:32:24.080 18:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 104728 00:32:24.080 18:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:24.080 18:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:24.080 18:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104728 00:32:24.338 killing process with pid 104728 00:32:24.338 Received shutdown signal, test time was about 2.000000 seconds 00:32:24.338 00:32:24.338 Latency(us) 00:32:24.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:24.338 =================================================================================================================== 00:32:24.338 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:24.338 18:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:24.339 18:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:24.339 18:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104728' 00:32:24.339 18:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 104728 00:32:24.339 18:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # 
wait 104728 00:32:25.714 18:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 104356 00:32:25.714 18:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 104356 ']' 00:32:25.714 18:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 104356 00:32:25.714 18:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:25.714 18:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:25.714 18:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104356 00:32:25.714 killing process with pid 104356 00:32:25.714 18:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:25.714 18:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:25.714 18:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104356' 00:32:25.714 18:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 104356 00:32:25.714 18:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 104356 00:32:27.088 ************************************ 00:32:27.088 END TEST nvmf_digest_clean 00:32:27.088 ************************************ 00:32:27.088 00:32:27.088 real 0m25.382s 00:32:27.088 user 0m47.724s 00:32:27.088 sys 0m5.150s 00:32:27.088 18:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:27.088 18:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:27.088 18:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:32:27.088 18:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:32:27.088 18:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:27.088 18:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:27.088 18:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:27.088 ************************************ 00:32:27.088 START TEST nvmf_digest_error 00:32:27.088 ************************************ 00:32:27.088 18:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:32:27.088 18:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:32:27.088 18:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:27.088 18:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:27.089 18:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:27.089 18:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=104867 00:32:27.089 18:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 104867 00:32:27.089 18:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:27.089 18:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 104867 ']' 00:32:27.089 18:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:27.089 18:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:27.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:27.089 18:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:27.089 18:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:27.089 18:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:27.089 [2024-07-22 18:39:38.940904] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:27.089 [2024-07-22 18:39:38.941061] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:27.347 [2024-07-22 18:39:39.112266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.347 [2024-07-22 18:39:39.362279] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:27.347 [2024-07-22 18:39:39.362358] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:27.347 [2024-07-22 18:39:39.362377] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:27.347 [2024-07-22 18:39:39.362393] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:27.347 [2024-07-22 18:39:39.362405] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
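For the nvmf_digest_error tests the target is started with --wait-for-rpc so that crc32c can be handed to the accel error module before the framework initializes (the accel_rpc notice a little further down confirms the assignment); the rest of the target configuration, the null0 bdev, the TCP transport and the 10.0.0.2:4420 listener, is applied through a batched rpc_cmd whose effects rather than commands appear in the trace. A minimal sketch of the visible part, assuming the target's default RPC socket /var/tmp/spdk.sock (the one waitforlisten polls above) and that the deferred init is released with framework_start_init, which the trace implies but does not print:

  TGT_RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"
  $TGT_RPC accel_assign_opc -o crc32c -m error   # route crc32c through the error-injection accel module
  $TGT_RPC framework_start_init                  # assumed step: let the deferred target init proceed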
00:32:27.347 [2024-07-22 18:39:39.362464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.945 18:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:27.945 18:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:32:27.945 18:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:27.945 18:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:27.945 18:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:27.945 18:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:27.945 18:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:32:27.945 18:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.945 18:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:27.945 [2024-07-22 18:39:39.943593] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:32:27.945 18:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.945 18:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:32:27.945 18:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:32:27.945 18:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.945 18:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:28.512 null0 00:32:28.512 [2024-07-22 18:39:40.288750] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:28.512 [2024-07-22 18:39:40.312976] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:28.512 18:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.512 18:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:32:28.512 18:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:28.512 18:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:32:28.512 18:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:32:28.512 18:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:32:28.512 18:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=104911 00:32:28.512 18:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 104911 /var/tmp/bperf.sock 00:32:28.512 18:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:32:28.512 18:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 104911 ']' 00:32:28.512 18:39:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:28.512 18:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:28.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:28.512 18:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:28.512 18:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:28.512 18:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:28.512 [2024-07-22 18:39:40.435260] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:28.512 [2024-07-22 18:39:40.435485] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104911 ] 00:32:28.770 [2024-07-22 18:39:40.614273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.029 [2024-07-22 18:39:40.919746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:29.595 18:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:29.595 18:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:32:29.595 18:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:29.595 18:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:29.595 18:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:29.595 18:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.595 18:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:29.595 18:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.595 18:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:29.595 18:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:30.162 nvme0n1 00:32:30.162 18:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:30.162 18:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.162 18:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:30.162 18:39:41 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.162 18:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:30.162 18:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:30.162 Running I/O for 2 seconds... 00:32:30.162 [2024-07-22 18:39:42.054464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.162 [2024-07-22 18:39:42.054578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.162 [2024-07-22 18:39:42.054602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.162 [2024-07-22 18:39:42.071904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.162 [2024-07-22 18:39:42.071975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.162 [2024-07-22 18:39:42.072012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.162 [2024-07-22 18:39:42.088285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.162 [2024-07-22 18:39:42.088356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.162 [2024-07-22 18:39:42.088393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.162 [2024-07-22 18:39:42.105156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.162 [2024-07-22 18:39:42.105237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.162 [2024-07-22 18:39:42.105258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.162 [2024-07-22 18:39:42.121679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.162 [2024-07-22 18:39:42.121752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.162 [2024-07-22 18:39:42.121794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.162 [2024-07-22 18:39:42.137852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.162 [2024-07-22 18:39:42.137935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.162 [2024-07-22 18:39:42.137972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:32:30.162 [2024-07-22 18:39:42.154084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.162 [2024-07-22 18:39:42.154141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.162 [2024-07-22 18:39:42.154162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.162 [2024-07-22 18:39:42.170987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.162 [2024-07-22 18:39:42.171061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.162 [2024-07-22 18:39:42.171081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.421 [2024-07-22 18:39:42.188065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.421 [2024-07-22 18:39:42.188137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.421 [2024-07-22 18:39:42.188173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.421 [2024-07-22 18:39:42.204519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.421 [2024-07-22 18:39:42.204587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.421 [2024-07-22 18:39:42.204623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.421 [2024-07-22 18:39:42.221470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.421 [2024-07-22 18:39:42.221527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.421 [2024-07-22 18:39:42.221547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.421 [2024-07-22 18:39:42.238597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.421 [2024-07-22 18:39:42.238683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.421 [2024-07-22 18:39:42.238719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.421 [2024-07-22 18:39:42.255458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.421 [2024-07-22 18:39:42.255531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.421 [2024-07-22 18:39:42.255567] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.421 [2024-07-22 18:39:42.272408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.421 [2024-07-22 18:39:42.272464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.422 [2024-07-22 18:39:42.272484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.422 [2024-07-22 18:39:42.289406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.422 [2024-07-22 18:39:42.289477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.422 [2024-07-22 18:39:42.289497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.422 [2024-07-22 18:39:42.305941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.422 [2024-07-22 18:39:42.305997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.422 [2024-07-22 18:39:42.306026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.422 [2024-07-22 18:39:42.323186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.422 [2024-07-22 18:39:42.323279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.422 [2024-07-22 18:39:42.323300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.422 [2024-07-22 18:39:42.341049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.422 [2024-07-22 18:39:42.341106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.422 [2024-07-22 18:39:42.341126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.422 [2024-07-22 18:39:42.359203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.422 [2024-07-22 18:39:42.359262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.422 [2024-07-22 18:39:42.359283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.422 [2024-07-22 18:39:42.377092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.422 [2024-07-22 18:39:42.377150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
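The long run of repeated entries here is the error path doing its job: with rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 issued above, the target's crc32c output, and with it the TCP data digest it sends, is corrupted, so the host reports "data digest error" on every read of this randread job and each command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); the earlier bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 call is what keeps the run alive, since the errored I/Os are counted and retried rather than failed up to bdevperf. The toggle itself is just the pair of target-side RPCs the trace already shows, in the order they were issued:

  TGT_RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"        # target socket, as in the sketch further up
  $TGT_RPC accel_error_inject_error -o crc32c -t disable          # host/digest.sh@63: injection off while the controller attaches
  # ... bdev_nvme_attach_controller --ddgst completes and nvme0n1 appears ...
  $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 256   # host/digest.sh@67: corrupt crc32c before perform_tests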
00:32:30.422 [2024-07-22 18:39:42.377171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.422 [2024-07-22 18:39:42.394446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.422 [2024-07-22 18:39:42.394503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.422 [2024-07-22 18:39:42.394524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.422 [2024-07-22 18:39:42.411737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.422 [2024-07-22 18:39:42.411809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.422 [2024-07-22 18:39:42.411843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.422 [2024-07-22 18:39:42.428807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.422 [2024-07-22 18:39:42.428890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.422 [2024-07-22 18:39:42.428911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.681 [2024-07-22 18:39:42.445767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.681 [2024-07-22 18:39:42.445824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-22 18:39:42.445858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.681 [2024-07-22 18:39:42.463159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.681 [2024-07-22 18:39:42.463233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-22 18:39:42.463254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.681 [2024-07-22 18:39:42.480043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.681 [2024-07-22 18:39:42.480099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-22 18:39:42.480119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.681 [2024-07-22 18:39:42.500548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.681 [2024-07-22 18:39:42.500691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:54 nsid:1 lba:6828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-22 18:39:42.500730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.681 [2024-07-22 18:39:42.520705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.681 [2024-07-22 18:39:42.520815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-22 18:39:42.520861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.681 [2024-07-22 18:39:42.540082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.681 [2024-07-22 18:39:42.540177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-22 18:39:42.540218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.681 [2024-07-22 18:39:42.558869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.681 [2024-07-22 18:39:42.558947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-22 18:39:42.558984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.681 [2024-07-22 18:39:42.575571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.681 [2024-07-22 18:39:42.575636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-22 18:39:42.575662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.681 [2024-07-22 18:39:42.594759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.681 [2024-07-22 18:39:42.594866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-22 18:39:42.594893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.681 [2024-07-22 18:39:42.613557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.681 [2024-07-22 18:39:42.613636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-22 18:39:42.613697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.681 [2024-07-22 18:39:42.632818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.681 
[2024-07-22 18:39:42.632894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-22 18:39:42.632920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.681 [2024-07-22 18:39:42.653803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.681 [2024-07-22 18:39:42.653889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-22 18:39:42.653915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.681 [2024-07-22 18:39:42.674365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.681 [2024-07-22 18:39:42.674438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-22 18:39:42.674463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.681 [2024-07-22 18:39:42.693333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.681 [2024-07-22 18:39:42.693424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.681 [2024-07-22 18:39:42.693449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.941 [2024-07-22 18:39:42.713148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.941 [2024-07-22 18:39:42.713209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.941 [2024-07-22 18:39:42.713235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.941 [2024-07-22 18:39:42.732151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.941 [2024-07-22 18:39:42.732212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.941 [2024-07-22 18:39:42.732238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.941 [2024-07-22 18:39:42.753275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.941 [2024-07-22 18:39:42.753386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.941 [2024-07-22 18:39:42.753413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.941 [2024-07-22 18:39:42.769511] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.941 [2024-07-22 18:39:42.769576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.941 [2024-07-22 18:39:42.769603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.941 [2024-07-22 18:39:42.786692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.941 [2024-07-22 18:39:42.786751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.941 [2024-07-22 18:39:42.786771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.941 [2024-07-22 18:39:42.804379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.941 [2024-07-22 18:39:42.804442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.941 [2024-07-22 18:39:42.804461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.941 [2024-07-22 18:39:42.821376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.941 [2024-07-22 18:39:42.821451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.941 [2024-07-22 18:39:42.821476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.941 [2024-07-22 18:39:42.839412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.941 [2024-07-22 18:39:42.839468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.941 [2024-07-22 18:39:42.839488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.941 [2024-07-22 18:39:42.857026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.941 [2024-07-22 18:39:42.857076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.941 [2024-07-22 18:39:42.857095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.941 [2024-07-22 18:39:42.873477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.941 [2024-07-22 18:39:42.873551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.941 [2024-07-22 18:39:42.873584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.941 [2024-07-22 18:39:42.891562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.941 [2024-07-22 18:39:42.891612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.941 [2024-07-22 18:39:42.891631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.941 [2024-07-22 18:39:42.909264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.941 [2024-07-22 18:39:42.909312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.941 [2024-07-22 18:39:42.909331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.941 [2024-07-22 18:39:42.925576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.941 [2024-07-22 18:39:42.925627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.941 [2024-07-22 18:39:42.925647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.941 [2024-07-22 18:39:42.943192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:30.941 [2024-07-22 18:39:42.943258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.941 [2024-07-22 18:39:42.943292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.201 [2024-07-22 18:39:42.960348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.201 [2024-07-22 18:39:42.960410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.201 [2024-07-22 18:39:42.960428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.201 [2024-07-22 18:39:42.978373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.201 [2024-07-22 18:39:42.978423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.201 [2024-07-22 18:39:42.978442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.201 [2024-07-22 18:39:42.996428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.201 [2024-07-22 18:39:42.996478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.201 [2024-07-22 18:39:42.996498] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.201 [2024-07-22 18:39:43.011627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.201 [2024-07-22 18:39:43.011679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.201 [2024-07-22 18:39:43.011710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.201 [2024-07-22 18:39:43.029727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.201 [2024-07-22 18:39:43.029786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.201 [2024-07-22 18:39:43.029805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.201 [2024-07-22 18:39:43.048281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.201 [2024-07-22 18:39:43.048330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.201 [2024-07-22 18:39:43.048350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.201 [2024-07-22 18:39:43.065433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.201 [2024-07-22 18:39:43.065484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.201 [2024-07-22 18:39:43.065515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.201 [2024-07-22 18:39:43.083686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.201 [2024-07-22 18:39:43.083742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.201 [2024-07-22 18:39:43.083761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.201 [2024-07-22 18:39:43.099031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.201 [2024-07-22 18:39:43.099077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.201 [2024-07-22 18:39:43.099096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.201 [2024-07-22 18:39:43.116902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.201 [2024-07-22 18:39:43.116948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3224 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.201 [2024-07-22 18:39:43.116968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.201 [2024-07-22 18:39:43.134657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.201 [2024-07-22 18:39:43.134705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.201 [2024-07-22 18:39:43.134737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.201 [2024-07-22 18:39:43.150517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.201 [2024-07-22 18:39:43.150582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.201 [2024-07-22 18:39:43.150603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.201 [2024-07-22 18:39:43.171139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.201 [2024-07-22 18:39:43.171214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.201 [2024-07-22 18:39:43.171244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.201 [2024-07-22 18:39:43.191936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.201 [2024-07-22 18:39:43.191986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.201 [2024-07-22 18:39:43.192006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.201 [2024-07-22 18:39:43.207491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.201 [2024-07-22 18:39:43.207540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.201 [2024-07-22 18:39:43.207560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.459 [2024-07-22 18:39:43.225082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.459 [2024-07-22 18:39:43.225162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.459 [2024-07-22 18:39:43.225182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.459 [2024-07-22 18:39:43.242049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.459 [2024-07-22 18:39:43.242104] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.459 [2024-07-22 18:39:43.242125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.459 [2024-07-22 18:39:43.256189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.459 [2024-07-22 18:39:43.256235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.459 [2024-07-22 18:39:43.256253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.459 [2024-07-22 18:39:43.274381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.459 [2024-07-22 18:39:43.274443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.459 [2024-07-22 18:39:43.274467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.459 [2024-07-22 18:39:43.291617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.459 [2024-07-22 18:39:43.291695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.459 [2024-07-22 18:39:43.291718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.459 [2024-07-22 18:39:43.308120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.459 [2024-07-22 18:39:43.308170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.459 [2024-07-22 18:39:43.308190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.459 [2024-07-22 18:39:43.324198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.459 [2024-07-22 18:39:43.324274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.459 [2024-07-22 18:39:43.324296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.459 [2024-07-22 18:39:43.340525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.459 [2024-07-22 18:39:43.340587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.459 [2024-07-22 18:39:43.340609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.459 [2024-07-22 18:39:43.357272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x61500002b280) 00:32:31.459 [2024-07-22 18:39:43.357321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.459 [2024-07-22 18:39:43.357353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.459 [2024-07-22 18:39:43.376147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.459 [2024-07-22 18:39:43.376332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.459 [2024-07-22 18:39:43.376372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.459 [2024-07-22 18:39:43.396881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.459 [2024-07-22 18:39:43.396954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.459 [2024-07-22 18:39:43.396992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.459 [2024-07-22 18:39:43.413028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.459 [2024-07-22 18:39:43.413098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.459 [2024-07-22 18:39:43.413135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.459 [2024-07-22 18:39:43.428885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.459 [2024-07-22 18:39:43.428971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.459 [2024-07-22 18:39:43.429008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.459 [2024-07-22 18:39:43.444920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.459 [2024-07-22 18:39:43.444992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.459 [2024-07-22 18:39:43.445029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.459 [2024-07-22 18:39:43.460967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.459 [2024-07-22 18:39:43.461038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.459 [2024-07-22 18:39:43.461074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.718 [2024-07-22 
18:39:43.476798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.718 [2024-07-22 18:39:43.476881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.718 [2024-07-22 18:39:43.476918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.718 [2024-07-22 18:39:43.492454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.718 [2024-07-22 18:39:43.492525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.718 [2024-07-22 18:39:43.492562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.718 [2024-07-22 18:39:43.508094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.718 [2024-07-22 18:39:43.508163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.718 [2024-07-22 18:39:43.508200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.718 [2024-07-22 18:39:43.523858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.718 [2024-07-22 18:39:43.523956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.718 [2024-07-22 18:39:43.523978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.718 [2024-07-22 18:39:43.539548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.718 [2024-07-22 18:39:43.539618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.718 [2024-07-22 18:39:43.539654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.718 [2024-07-22 18:39:43.556387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.718 [2024-07-22 18:39:43.556459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.718 [2024-07-22 18:39:43.556496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.718 [2024-07-22 18:39:43.572550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.718 [2024-07-22 18:39:43.572654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.718 [2024-07-22 18:39:43.572673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.718 [2024-07-22 18:39:43.588562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.718 [2024-07-22 18:39:43.588649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.718 [2024-07-22 18:39:43.588685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.718 [2024-07-22 18:39:43.604859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.718 [2024-07-22 18:39:43.604957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.718 [2024-07-22 18:39:43.604978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.718 [2024-07-22 18:39:43.620771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.718 [2024-07-22 18:39:43.620866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.718 [2024-07-22 18:39:43.620888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.718 [2024-07-22 18:39:43.636668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.718 [2024-07-22 18:39:43.636739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.718 [2024-07-22 18:39:43.636774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.718 [2024-07-22 18:39:43.653319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.719 [2024-07-22 18:39:43.653399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.719 [2024-07-22 18:39:43.653451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.719 [2024-07-22 18:39:43.668978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.719 [2024-07-22 18:39:43.669048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.719 [2024-07-22 18:39:43.669085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.719 [2024-07-22 18:39:43.685150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.719 [2024-07-22 18:39:43.685221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.719 [2024-07-22 
18:39:43.685257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.719 [2024-07-22 18:39:43.702033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.719 [2024-07-22 18:39:43.702090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.719 [2024-07-22 18:39:43.702111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.719 [2024-07-22 18:39:43.719310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.719 [2024-07-22 18:39:43.719366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.719 [2024-07-22 18:39:43.719387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.977 [2024-07-22 18:39:43.736826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.977 [2024-07-22 18:39:43.736922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.977 [2024-07-22 18:39:43.736944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.977 [2024-07-22 18:39:43.753759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.977 [2024-07-22 18:39:43.753862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.977 [2024-07-22 18:39:43.753884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.977 [2024-07-22 18:39:43.770244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.977 [2024-07-22 18:39:43.770318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.977 [2024-07-22 18:39:43.770339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.977 [2024-07-22 18:39:43.786731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.977 [2024-07-22 18:39:43.786802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.977 [2024-07-22 18:39:43.786839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.977 [2024-07-22 18:39:43.803058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.977 [2024-07-22 18:39:43.803113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 
nsid:1 lba:9029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.977 [2024-07-22 18:39:43.803133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.978 [2024-07-22 18:39:43.821865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.978 [2024-07-22 18:39:43.821954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.978 [2024-07-22 18:39:43.821976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.978 [2024-07-22 18:39:43.837934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.978 [2024-07-22 18:39:43.838005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.978 [2024-07-22 18:39:43.838065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.978 [2024-07-22 18:39:43.854794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.978 [2024-07-22 18:39:43.854894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.978 [2024-07-22 18:39:43.854917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.978 [2024-07-22 18:39:43.872103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.978 [2024-07-22 18:39:43.872175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.978 [2024-07-22 18:39:43.872195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.978 [2024-07-22 18:39:43.889168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.978 [2024-07-22 18:39:43.889225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.978 [2024-07-22 18:39:43.889245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.978 [2024-07-22 18:39:43.906165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.978 [2024-07-22 18:39:43.906221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.978 [2024-07-22 18:39:43.906250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.978 [2024-07-22 18:39:43.922812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:31.978 [2024-07-22 
18:39:43.922894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:31.978 [2024-07-22 18:39:43.922915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:31.978 [2024-07-22 18:39:43.942398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:31.978 [2024-07-22 18:39:43.942460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:31.978 [2024-07-22 18:39:43.942481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:31.978 [2024-07-22 18:39:43.959543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:31.978 [2024-07-22 18:39:43.959651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:31.978 [2024-07-22 18:39:43.959684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:31.978 [2024-07-22 18:39:43.979585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:31.978 [2024-07-22 18:39:43.979696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:31.978 [2024-07-22 18:39:43.979726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:32.236 [2024-07-22 18:39:43.997037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:32.236 [2024-07-22 18:39:43.997118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:32.236 [2024-07-22 18:39:43.997149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:32.236 [2024-07-22 18:39:44.018566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:32.236 [2024-07-22 18:39:44.018669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:32.236 [2024-07-22 18:39:44.018699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:32.236
00:32:32.236 Latency(us)
00:32:32.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:32.236 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:32:32.236 nvme0n1 : 2.01 14608.41 57.06 0.00 0.00 8748.22 4944.99 26333.56
00:32:32.236 ===================================================================================================================
00:32:32.236 Total : 14608.41 57.06 0.00 0.00 8748.22 4944.99 26333.56
00:32:32.236 0
00:32:32.236 18:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:32.236 18:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:32.236 18:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:32.236 18:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:32.236 | .driver_specific
00:32:32.236 | .nvme_error
00:32:32.236 | .status_code
00:32:32.236 | .command_transient_transport_error'
00:32:32.495 18:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 114 > 0 ))
00:32:32.495 18:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 104911
00:32:32.495 18:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 104911 ']'
00:32:32.495 18:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 104911
00:32:32.495 18:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:32:32.495 18:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:32:32.495 18:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104911
00:32:32.495 18:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:32:32.495 18:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:32:32.495 18:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104911'
00:32:32.495 killing process with pid 104911
00:32:32.495 18:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 104911
00:32:32.495 Received shutdown signal, test time was about 2.000000 seconds
00:32:32.495
00:32:32.495 Latency(us)
00:32:32.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:32.495 ===================================================================================================================
00:32:32.495 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:32.495 18:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 104911
00:32:33.430 18:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:32:33.430 18:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:33.430 18:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:32:33.430 18:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:32:33.430 18:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:32:33.430 18:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=105007
00:32:33.430 18:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:32:33.430 18:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # 
waitforlisten 105007 /var/tmp/bperf.sock 00:32:33.430 18:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 105007 ']' 00:32:33.430 18:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:33.430 18:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:33.430 18:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:33.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:33.430 18:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:33.430 18:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:33.688 [2024-07-22 18:39:45.576818] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:33.688 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:33.688 Zero copy mechanism will not be used. 00:32:33.688 [2024-07-22 18:39:45.577029] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105007 ] 00:32:33.947 [2024-07-22 18:39:45.754385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.205 [2024-07-22 18:39:46.022726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:34.464 18:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:34.464 18:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:32:34.464 18:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:34.464 18:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:34.722 18:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:34.722 18:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.722 18:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:34.722 18:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.722 18:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:34.722 18:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:34.981 nvme0n1 00:32:35.239 18:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c 
-t corrupt -i 32 00:32:35.240 18:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.240 18:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:35.240 18:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.240 18:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:35.240 18:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:35.240 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:35.240 Zero copy mechanism will not be used. 00:32:35.240 Running I/O for 2 seconds... 00:32:35.240 [2024-07-22 18:39:47.170162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.240 [2024-07-22 18:39:47.170237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.240 [2024-07-22 18:39:47.170261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.240 [2024-07-22 18:39:47.176527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.240 [2024-07-22 18:39:47.176576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.240 [2024-07-22 18:39:47.176596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.240 [2024-07-22 18:39:47.182616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.240 [2024-07-22 18:39:47.182681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.240 [2024-07-22 18:39:47.182702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.240 [2024-07-22 18:39:47.188805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.240 [2024-07-22 18:39:47.188895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.240 [2024-07-22 18:39:47.188916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.240 [2024-07-22 18:39:47.194973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.240 [2024-07-22 18:39:47.195023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.240 [2024-07-22 18:39:47.195043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.240 [2024-07-22 18:39:47.201361] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.240 [2024-07-22 18:39:47.201428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.240 [2024-07-22 18:39:47.201448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.240 [2024-07-22 18:39:47.208006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.240 [2024-07-22 18:39:47.208071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.240 [2024-07-22 18:39:47.208091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.240 [2024-07-22 18:39:47.212416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.240 [2024-07-22 18:39:47.212467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.240 [2024-07-22 18:39:47.212486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.240 [2024-07-22 18:39:47.218036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.240 [2024-07-22 18:39:47.218084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.240 [2024-07-22 18:39:47.218104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.240 [2024-07-22 18:39:47.224326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.240 [2024-07-22 18:39:47.224375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.240 [2024-07-22 18:39:47.224395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.240 [2024-07-22 18:39:47.228597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.240 [2024-07-22 18:39:47.228644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.240 [2024-07-22 18:39:47.228663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.240 [2024-07-22 18:39:47.234118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.240 [2024-07-22 18:39:47.234167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.240 [2024-07-22 18:39:47.234186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.240 [2024-07-22 18:39:47.240447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.240 [2024-07-22 18:39:47.240495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.240 [2024-07-22 18:39:47.240515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.240 [2024-07-22 18:39:47.246956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.240 [2024-07-22 18:39:47.247006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.240 [2024-07-22 18:39:47.247026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.240 [2024-07-22 18:39:47.251346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.240 [2024-07-22 18:39:47.251403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.240 [2024-07-22 18:39:47.251434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.502 [2024-07-22 18:39:47.256632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.502 [2024-07-22 18:39:47.256682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.502 [2024-07-22 18:39:47.256701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.502 [2024-07-22 18:39:47.262294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.502 [2024-07-22 18:39:47.262344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.502 [2024-07-22 18:39:47.262364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.502 [2024-07-22 18:39:47.266833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.502 [2024-07-22 18:39:47.266909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.502 [2024-07-22 18:39:47.266929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.502 [2024-07-22 18:39:47.272162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.502 [2024-07-22 18:39:47.272224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.272243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.503 [2024-07-22 18:39:47.276856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.503 [2024-07-22 18:39:47.276919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.276939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.503 [2024-07-22 18:39:47.281198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.503 [2024-07-22 18:39:47.281262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.281281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.503 [2024-07-22 18:39:47.285860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.503 [2024-07-22 18:39:47.285923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.285942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.503 [2024-07-22 18:39:47.291782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.503 [2024-07-22 18:39:47.291866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.291889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.503 [2024-07-22 18:39:47.298107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.503 [2024-07-22 18:39:47.298155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.298175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.503 [2024-07-22 18:39:47.302511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.503 [2024-07-22 18:39:47.302559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.302579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.503 [2024-07-22 18:39:47.307490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.503 [2024-07-22 18:39:47.307554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.307574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.503 [2024-07-22 18:39:47.313732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.503 [2024-07-22 18:39:47.313781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.313800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.503 [2024-07-22 18:39:47.319729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.503 [2024-07-22 18:39:47.319798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.319817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.503 [2024-07-22 18:39:47.324219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.503 [2024-07-22 18:39:47.324268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.324286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.503 [2024-07-22 18:39:47.329475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.503 [2024-07-22 18:39:47.329539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.329558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.503 [2024-07-22 18:39:47.335510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.503 [2024-07-22 18:39:47.335574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.335593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.503 [2024-07-22 18:39:47.340047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.503 [2024-07-22 18:39:47.340112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.340131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.503 [2024-07-22 18:39:47.345519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.503 [2024-07-22 18:39:47.345583] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.345621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.503 [2024-07-22 18:39:47.352082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.503 [2024-07-22 18:39:47.352145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.352165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.503 [2024-07-22 18:39:47.358167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.503 [2024-07-22 18:39:47.358217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.358237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.503 [2024-07-22 18:39:47.362404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.503 [2024-07-22 18:39:47.362465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.362483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.503 [2024-07-22 18:39:47.368532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.503 [2024-07-22 18:39:47.368596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.368615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.503 [2024-07-22 18:39:47.373579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.503 [2024-07-22 18:39:47.373642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.373660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.503 [2024-07-22 18:39:47.378143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.503 [2024-07-22 18:39:47.378191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.378211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.503 [2024-07-22 18:39:47.383299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:32:35.503 [2024-07-22 18:39:47.383362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.383381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.503 [2024-07-22 18:39:47.388371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.503 [2024-07-22 18:39:47.388433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.388452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.503 [2024-07-22 18:39:47.392983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.503 [2024-07-22 18:39:47.393043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.393062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.503 [2024-07-22 18:39:47.398076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.503 [2024-07-22 18:39:47.398124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.398143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.503 [2024-07-22 18:39:47.403079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.503 [2024-07-22 18:39:47.403140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.403159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.503 [2024-07-22 18:39:47.407370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.503 [2024-07-22 18:39:47.407432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.503 [2024-07-22 18:39:47.407451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.504 [2024-07-22 18:39:47.413007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.504 [2024-07-22 18:39:47.413070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.504 [2024-07-22 18:39:47.413090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.504 [2024-07-22 18:39:47.417049] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.504 [2024-07-22 18:39:47.417111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.504 [2024-07-22 18:39:47.417130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.504 [2024-07-22 18:39:47.422563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.504 [2024-07-22 18:39:47.422627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.504 [2024-07-22 18:39:47.422646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.504 [2024-07-22 18:39:47.428354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.504 [2024-07-22 18:39:47.428416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.504 [2024-07-22 18:39:47.428435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.504 [2024-07-22 18:39:47.432269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.504 [2024-07-22 18:39:47.432329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.504 [2024-07-22 18:39:47.432347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.504 [2024-07-22 18:39:47.438429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.504 [2024-07-22 18:39:47.438491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.504 [2024-07-22 18:39:47.438509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.504 [2024-07-22 18:39:47.442396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.504 [2024-07-22 18:39:47.442459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.504 [2024-07-22 18:39:47.442477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.504 [2024-07-22 18:39:47.447323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.504 [2024-07-22 18:39:47.447385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.504 [2024-07-22 18:39:47.447404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.504 [2024-07-22 18:39:47.453635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.504 [2024-07-22 18:39:47.453712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.504 [2024-07-22 18:39:47.453730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.504 [2024-07-22 18:39:47.457790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.504 [2024-07-22 18:39:47.457860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.504 [2024-07-22 18:39:47.457880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.504 [2024-07-22 18:39:47.462987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.504 [2024-07-22 18:39:47.463050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.504 [2024-07-22 18:39:47.463069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.504 [2024-07-22 18:39:47.468649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.504 [2024-07-22 18:39:47.468713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.504 [2024-07-22 18:39:47.468732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.504 [2024-07-22 18:39:47.473475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.504 [2024-07-22 18:39:47.473538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.504 [2024-07-22 18:39:47.473558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.504 [2024-07-22 18:39:47.478458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.504 [2024-07-22 18:39:47.478522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.504 [2024-07-22 18:39:47.478541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.504 [2024-07-22 18:39:47.484055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.504 [2024-07-22 18:39:47.484118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.504 [2024-07-22 18:39:47.484138] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.504 [2024-07-22 18:39:47.488594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.504 [2024-07-22 18:39:47.488657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.504 [2024-07-22 18:39:47.488676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.504 [2024-07-22 18:39:47.493647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.504 [2024-07-22 18:39:47.493709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.504 [2024-07-22 18:39:47.493728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.504 [2024-07-22 18:39:47.498970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.504 [2024-07-22 18:39:47.499047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.504 [2024-07-22 18:39:47.499066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.504 [2024-07-22 18:39:47.503986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.504 [2024-07-22 18:39:47.504033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.504 [2024-07-22 18:39:47.504052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.504 [2024-07-22 18:39:47.509142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.504 [2024-07-22 18:39:47.509186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.504 [2024-07-22 18:39:47.509205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.504 [2024-07-22 18:39:47.513981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.504 [2024-07-22 18:39:47.514035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.504 [2024-07-22 18:39:47.514054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.783 [2024-07-22 18:39:47.518636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.783 [2024-07-22 18:39:47.518682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.783 [2024-07-22 18:39:47.518701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.783 [2024-07-22 18:39:47.524372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.783 [2024-07-22 18:39:47.524421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.783 [2024-07-22 18:39:47.524441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.783 [2024-07-22 18:39:47.530433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.783 [2024-07-22 18:39:47.530480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.783 [2024-07-22 18:39:47.530497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.783 [2024-07-22 18:39:47.534306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.783 [2024-07-22 18:39:47.534368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.783 [2024-07-22 18:39:47.534387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.783 [2024-07-22 18:39:47.540589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.783 [2024-07-22 18:39:47.540638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.783 [2024-07-22 18:39:47.540671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.783 [2024-07-22 18:39:47.547080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.783 [2024-07-22 18:39:47.547143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.783 [2024-07-22 18:39:47.547161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.783 [2024-07-22 18:39:47.553737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.783 [2024-07-22 18:39:47.553786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.783 [2024-07-22 18:39:47.553805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.783 [2024-07-22 18:39:47.558326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.783 [2024-07-22 18:39:47.558375] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.558394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.563908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.784 [2024-07-22 18:39:47.563953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.563971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.570360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.784 [2024-07-22 18:39:47.570424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.570444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.576341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.784 [2024-07-22 18:39:47.576388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.576407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.580120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.784 [2024-07-22 18:39:47.580166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.580183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.586159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.784 [2024-07-22 18:39:47.586207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.586226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.592293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.784 [2024-07-22 18:39:47.592341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.592358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.596542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:32:35.784 [2024-07-22 18:39:47.596590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.596608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.602203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.784 [2024-07-22 18:39:47.602251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.602270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.608692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.784 [2024-07-22 18:39:47.608756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.608775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.614861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.784 [2024-07-22 18:39:47.614921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.614941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.618954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.784 [2024-07-22 18:39:47.619000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.619019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.625218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.784 [2024-07-22 18:39:47.625282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.625302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.631600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.784 [2024-07-22 18:39:47.631679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.631706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.636827] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.784 [2024-07-22 18:39:47.636919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.636939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.640819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.784 [2024-07-22 18:39:47.640909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.640928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.645956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.784 [2024-07-22 18:39:47.646028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.646049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.650826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.784 [2024-07-22 18:39:47.650902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.650921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.655936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.784 [2024-07-22 18:39:47.655982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.656000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.660704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.784 [2024-07-22 18:39:47.660752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.660770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.666079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.784 [2024-07-22 18:39:47.666128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.666147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.670156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.784 [2024-07-22 18:39:47.670205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.670224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.675561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.784 [2024-07-22 18:39:47.675612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.675631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.680554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.784 [2024-07-22 18:39:47.680616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.680635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.685676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.784 [2024-07-22 18:39:47.685723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.685742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.690095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.784 [2024-07-22 18:39:47.690144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.690163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.696107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.784 [2024-07-22 18:39:47.696155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.784 [2024-07-22 18:39:47.696173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.784 [2024-07-22 18:39:47.702722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.785 [2024-07-22 18:39:47.702771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.785 [2024-07-22 18:39:47.702790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.785 [2024-07-22 18:39:47.708957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.785 [2024-07-22 18:39:47.709021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.785 [2024-07-22 18:39:47.709041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.785 [2024-07-22 18:39:47.712853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.785 [2024-07-22 18:39:47.712926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.785 [2024-07-22 18:39:47.712945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.785 [2024-07-22 18:39:47.719106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.785 [2024-07-22 18:39:47.719170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.785 [2024-07-22 18:39:47.719190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.785 [2024-07-22 18:39:47.725562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.785 [2024-07-22 18:39:47.725641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.785 [2024-07-22 18:39:47.725660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.785 [2024-07-22 18:39:47.732014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.785 [2024-07-22 18:39:47.732076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.785 [2024-07-22 18:39:47.732095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.785 [2024-07-22 18:39:47.736157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.785 [2024-07-22 18:39:47.736220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.785 [2024-07-22 18:39:47.736238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.785 [2024-07-22 18:39:47.741861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.785 [2024-07-22 18:39:47.741921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25248 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:32:35.785 [2024-07-22 18:39:47.741941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.785 [2024-07-22 18:39:47.746183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.785 [2024-07-22 18:39:47.746229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.785 [2024-07-22 18:39:47.746248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.785 [2024-07-22 18:39:47.751377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.785 [2024-07-22 18:39:47.751425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.785 [2024-07-22 18:39:47.751444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.785 [2024-07-22 18:39:47.757038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.785 [2024-07-22 18:39:47.757086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.785 [2024-07-22 18:39:47.757105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.785 [2024-07-22 18:39:47.762702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.785 [2024-07-22 18:39:47.762766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.785 [2024-07-22 18:39:47.762786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.785 [2024-07-22 18:39:47.766709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.785 [2024-07-22 18:39:47.766771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.785 [2024-07-22 18:39:47.766790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.785 [2024-07-22 18:39:47.772725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.785 [2024-07-22 18:39:47.772789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.785 [2024-07-22 18:39:47.772809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.785 [2024-07-22 18:39:47.778945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.785 [2024-07-22 18:39:47.779006] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.785 [2024-07-22 18:39:47.779024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.785 [2024-07-22 18:39:47.783071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.785 [2024-07-22 18:39:47.783119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.785 [2024-07-22 18:39:47.783138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.785 [2024-07-22 18:39:47.788561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.785 [2024-07-22 18:39:47.788610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.785 [2024-07-22 18:39:47.788628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.785 [2024-07-22 18:39:47.794532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.785 [2024-07-22 18:39:47.794581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.785 [2024-07-22 18:39:47.794601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.785 [2024-07-22 18:39:47.798502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:35.785 [2024-07-22 18:39:47.798550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.785 [2024-07-22 18:39:47.798569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.045 [2024-07-22 18:39:47.804587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.045 [2024-07-22 18:39:47.804663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.045 [2024-07-22 18:39:47.804681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.045 [2024-07-22 18:39:47.809488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.045 [2024-07-22 18:39:47.809536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.045 [2024-07-22 18:39:47.809555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.045 [2024-07-22 18:39:47.814631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:32:36.045 [2024-07-22 18:39:47.814687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.045 [2024-07-22 18:39:47.814705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.045 [2024-07-22 18:39:47.818882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.045 [2024-07-22 18:39:47.818927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.045 [2024-07-22 18:39:47.818946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.045 [2024-07-22 18:39:47.824643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.045 [2024-07-22 18:39:47.824689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.045 [2024-07-22 18:39:47.824707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.045 [2024-07-22 18:39:47.830635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.045 [2024-07-22 18:39:47.830683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.045 [2024-07-22 18:39:47.830703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.045 [2024-07-22 18:39:47.834789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.045 [2024-07-22 18:39:47.834868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.045 [2024-07-22 18:39:47.834898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.045 [2024-07-22 18:39:47.840165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.045 [2024-07-22 18:39:47.840213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.045 [2024-07-22 18:39:47.840231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.045 [2024-07-22 18:39:47.846104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.045 [2024-07-22 18:39:47.846152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.045 [2024-07-22 18:39:47.846171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.045 [2024-07-22 18:39:47.850053] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.045 [2024-07-22 18:39:47.850098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.045 [2024-07-22 18:39:47.850117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.045 [2024-07-22 18:39:47.856111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.045 [2024-07-22 18:39:47.856159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.045 [2024-07-22 18:39:47.856178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.046 [2024-07-22 18:39:47.862094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.046 [2024-07-22 18:39:47.862143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.046 [2024-07-22 18:39:47.862162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.046 [2024-07-22 18:39:47.868180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.046 [2024-07-22 18:39:47.868228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.046 [2024-07-22 18:39:47.868246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.046 [2024-07-22 18:39:47.874100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.046 [2024-07-22 18:39:47.874148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.046 [2024-07-22 18:39:47.874167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.046 [2024-07-22 18:39:47.879990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.046 [2024-07-22 18:39:47.880038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.046 [2024-07-22 18:39:47.880058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.046 [2024-07-22 18:39:47.886043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.046 [2024-07-22 18:39:47.886090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.046 [2024-07-22 18:39:47.886109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.046 [2024-07-22 18:39:47.891965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.046 [2024-07-22 18:39:47.892010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.046 [2024-07-22 18:39:47.892029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.046 [2024-07-22 18:39:47.897483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.046 [2024-07-22 18:39:47.897547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.046 [2024-07-22 18:39:47.897565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.046 [2024-07-22 18:39:47.903421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.046 [2024-07-22 18:39:47.903486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.046 [2024-07-22 18:39:47.903506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.046 [2024-07-22 18:39:47.909552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.046 [2024-07-22 18:39:47.909600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.046 [2024-07-22 18:39:47.909619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.046 [2024-07-22 18:39:47.913012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.046 [2024-07-22 18:39:47.913059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.046 [2024-07-22 18:39:47.913077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.046 [2024-07-22 18:39:47.919144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.046 [2024-07-22 18:39:47.919193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.046 [2024-07-22 18:39:47.919212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.046 [2024-07-22 18:39:47.923245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.046 [2024-07-22 18:39:47.923293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.046 [2024-07-22 18:39:47.923312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.046 [2024-07-22 18:39:47.928371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.046 [2024-07-22 18:39:47.928421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.046 [2024-07-22 18:39:47.928441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.046 [2024-07-22 18:39:47.932885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.046 [2024-07-22 18:39:47.932935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.046 [2024-07-22 18:39:47.932955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.046 [2024-07-22 18:39:47.938431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.046 [2024-07-22 18:39:47.938481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.046 [2024-07-22 18:39:47.938502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.046 [2024-07-22 18:39:47.945084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.046 [2024-07-22 18:39:47.945134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.046 [2024-07-22 18:39:47.945153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.046 [2024-07-22 18:39:47.951633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.046 [2024-07-22 18:39:47.951684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.046 [2024-07-22 18:39:47.951704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.046 [2024-07-22 18:39:47.956154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.046 [2024-07-22 18:39:47.956202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.046 [2024-07-22 18:39:47.956221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.046 [2024-07-22 18:39:47.961689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.046 [2024-07-22 18:39:47.961740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16352 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:32:36.046 [2024-07-22 18:39:47.961759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.046 [2024-07-22 18:39:47.968080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.046 [2024-07-22 18:39:47.968128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.046 [2024-07-22 18:39:47.968148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.046 [2024-07-22 18:39:47.974312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.046 [2024-07-22 18:39:47.974360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.046 [2024-07-22 18:39:47.974379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.046 [2024-07-22 18:39:47.979598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.046 [2024-07-22 18:39:47.979646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.046 [2024-07-22 18:39:47.979665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.046 [2024-07-22 18:39:47.984692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.046 [2024-07-22 18:39:47.984739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.046 [2024-07-22 18:39:47.984758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.046 [2024-07-22 18:39:47.988377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.046 [2024-07-22 18:39:47.988425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.046 [2024-07-22 18:39:47.988444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.046 [2024-07-22 18:39:47.994250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.046 [2024-07-22 18:39:47.994299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.047 [2024-07-22 18:39:47.994318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.047 [2024-07-22 18:39:47.999874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.047 [2024-07-22 18:39:47.999922] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.047 [2024-07-22 18:39:47.999942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.047 [2024-07-22 18:39:48.004360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.047 [2024-07-22 18:39:48.004408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.047 [2024-07-22 18:39:48.004428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.047 [2024-07-22 18:39:48.010207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.047 [2024-07-22 18:39:48.010255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.047 [2024-07-22 18:39:48.010275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.047 [2024-07-22 18:39:48.016619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.047 [2024-07-22 18:39:48.016669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.047 [2024-07-22 18:39:48.016688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.047 [2024-07-22 18:39:48.022646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.047 [2024-07-22 18:39:48.022696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.047 [2024-07-22 18:39:48.022716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.047 [2024-07-22 18:39:48.026554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.047 [2024-07-22 18:39:48.026602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.047 [2024-07-22 18:39:48.026640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.047 [2024-07-22 18:39:48.032818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.047 [2024-07-22 18:39:48.032885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.047 [2024-07-22 18:39:48.032905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.047 [2024-07-22 18:39:48.038971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:32:36.047 [2024-07-22 18:39:48.039019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.047 [2024-07-22 18:39:48.039039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.047 [2024-07-22 18:39:48.044879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.047 [2024-07-22 18:39:48.044926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.047 [2024-07-22 18:39:48.044945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.047 [2024-07-22 18:39:48.050133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.047 [2024-07-22 18:39:48.050179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.047 [2024-07-22 18:39:48.050198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.047 [2024-07-22 18:39:48.055754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.047 [2024-07-22 18:39:48.055803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.047 [2024-07-22 18:39:48.055822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.307 [2024-07-22 18:39:48.061138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.307 [2024-07-22 18:39:48.061187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.307 [2024-07-22 18:39:48.061207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.307 [2024-07-22 18:39:48.065957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.307 [2024-07-22 18:39:48.066006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.307 [2024-07-22 18:39:48.066042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.307 [2024-07-22 18:39:48.071002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.307 [2024-07-22 18:39:48.071050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.307 [2024-07-22 18:39:48.071070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.307 [2024-07-22 18:39:48.076333] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.307 [2024-07-22 18:39:48.076383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.307 [2024-07-22 18:39:48.076402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.307 [2024-07-22 18:39:48.081160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.307 [2024-07-22 18:39:48.081208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.307 [2024-07-22 18:39:48.081228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.307 [2024-07-22 18:39:48.085907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.307 [2024-07-22 18:39:48.085954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.307 [2024-07-22 18:39:48.085973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.307 [2024-07-22 18:39:48.091743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.307 [2024-07-22 18:39:48.091791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.307 [2024-07-22 18:39:48.091811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.307 [2024-07-22 18:39:48.096128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.307 [2024-07-22 18:39:48.096176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.307 [2024-07-22 18:39:48.096195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.307 [2024-07-22 18:39:48.101690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.307 [2024-07-22 18:39:48.101738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.307 [2024-07-22 18:39:48.101758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.307 [2024-07-22 18:39:48.105667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.307 [2024-07-22 18:39:48.105714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.307 [2024-07-22 18:39:48.105733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.307 [2024-07-22 18:39:48.111402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.307 [2024-07-22 18:39:48.111453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.307 [2024-07-22 18:39:48.111472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.117701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.117750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.308 [2024-07-22 18:39:48.117771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.121729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.121775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.308 [2024-07-22 18:39:48.121794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.127452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.127500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.308 [2024-07-22 18:39:48.127520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.132568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.132616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.308 [2024-07-22 18:39:48.132635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.136340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.136388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.308 [2024-07-22 18:39:48.136407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.141717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.141766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.308 [2024-07-22 18:39:48.141786] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.145903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.145947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.308 [2024-07-22 18:39:48.145966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.151178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.151225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.308 [2024-07-22 18:39:48.151244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.157224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.157279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.308 [2024-07-22 18:39:48.157299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.162817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.162879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.308 [2024-07-22 18:39:48.162899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.168754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.168803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.308 [2024-07-22 18:39:48.168822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.172747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.172795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.308 [2024-07-22 18:39:48.172814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.178825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.178886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6176 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:32:36.308 [2024-07-22 18:39:48.178905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.185250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.185299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.308 [2024-07-22 18:39:48.185318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.191384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.191433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.308 [2024-07-22 18:39:48.191453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.195304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.195353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.308 [2024-07-22 18:39:48.195372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.200338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.200394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.308 [2024-07-22 18:39:48.200414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.204850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.204896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.308 [2024-07-22 18:39:48.204915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.210336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.210385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.308 [2024-07-22 18:39:48.210404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.215209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.215261] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.308 [2024-07-22 18:39:48.215280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.220491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.220544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.308 [2024-07-22 18:39:48.220564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.225904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.225954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.308 [2024-07-22 18:39:48.225974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.231063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.231116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.308 [2024-07-22 18:39:48.231137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.236128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.236209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.308 [2024-07-22 18:39:48.236230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.241407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.241456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.308 [2024-07-22 18:39:48.241476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.246880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.246959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.308 [2024-07-22 18:39:48.246979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.308 [2024-07-22 18:39:48.251563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:32:36.308 [2024-07-22 18:39:48.251647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.309 [2024-07-22 18:39:48.251666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.309 [2024-07-22 18:39:48.257274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.309 [2024-07-22 18:39:48.257337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.309 [2024-07-22 18:39:48.257356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.309 [2024-07-22 18:39:48.261340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.309 [2024-07-22 18:39:48.261413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.309 [2024-07-22 18:39:48.261433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.309 [2024-07-22 18:39:48.266910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.309 [2024-07-22 18:39:48.266996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.309 [2024-07-22 18:39:48.267019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.309 [2024-07-22 18:39:48.273222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.309 [2024-07-22 18:39:48.273318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.309 [2024-07-22 18:39:48.273340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.309 [2024-07-22 18:39:48.279122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.309 [2024-07-22 18:39:48.279201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.309 [2024-07-22 18:39:48.279221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.309 [2024-07-22 18:39:48.285565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.309 [2024-07-22 18:39:48.285651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.309 [2024-07-22 18:39:48.285672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.309 [2024-07-22 18:39:48.292111] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.309 [2024-07-22 18:39:48.292209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.309 [2024-07-22 18:39:48.292232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.309 [2024-07-22 18:39:48.298488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.309 [2024-07-22 18:39:48.298615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.309 [2024-07-22 18:39:48.298637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.309 [2024-07-22 18:39:48.302887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.309 [2024-07-22 18:39:48.302954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.309 [2024-07-22 18:39:48.302975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.309 [2024-07-22 18:39:48.308807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.309 [2024-07-22 18:39:48.308908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.309 [2024-07-22 18:39:48.308929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.309 [2024-07-22 18:39:48.313268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.309 [2024-07-22 18:39:48.313319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.309 [2024-07-22 18:39:48.313339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.309 [2024-07-22 18:39:48.319057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.309 [2024-07-22 18:39:48.319130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.309 [2024-07-22 18:39:48.319150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.569 [2024-07-22 18:39:48.323855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.569 [2024-07-22 18:39:48.323920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.569 [2024-07-22 18:39:48.323939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.569 [2024-07-22 18:39:48.328563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.569 [2024-07-22 18:39:48.328632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.569 [2024-07-22 18:39:48.328650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.569 [2024-07-22 18:39:48.333698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.569 [2024-07-22 18:39:48.333767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.569 [2024-07-22 18:39:48.333786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.569 [2024-07-22 18:39:48.338609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.569 [2024-07-22 18:39:48.338671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.569 [2024-07-22 18:39:48.338691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.569 [2024-07-22 18:39:48.343993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.569 [2024-07-22 18:39:48.344056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.569 [2024-07-22 18:39:48.344076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.569 [2024-07-22 18:39:48.348634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.569 [2024-07-22 18:39:48.348716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.569 [2024-07-22 18:39:48.348735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.569 [2024-07-22 18:39:48.353390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.569 [2024-07-22 18:39:48.353441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.569 [2024-07-22 18:39:48.353460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.569 [2024-07-22 18:39:48.358839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.569 [2024-07-22 18:39:48.358930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.569 [2024-07-22 18:39:48.358951] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.569 [2024-07-22 18:39:48.364181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.569 [2024-07-22 18:39:48.364269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.364289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.369302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.570 [2024-07-22 18:39:48.369363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.369385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.374350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.570 [2024-07-22 18:39:48.374399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.374419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.380052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.570 [2024-07-22 18:39:48.380132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.380152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.387118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.570 [2024-07-22 18:39:48.387224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.387261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.391772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.570 [2024-07-22 18:39:48.391866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.391889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.398265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.570 [2024-07-22 18:39:48.398324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.398345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.405230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.570 [2024-07-22 18:39:48.405321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.405342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.411939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.570 [2024-07-22 18:39:48.412019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.412040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.416126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.570 [2024-07-22 18:39:48.416191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.416211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.422992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.570 [2024-07-22 18:39:48.423104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.423128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.429740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.570 [2024-07-22 18:39:48.429865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.429889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.434431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.570 [2024-07-22 18:39:48.434518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.434539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.440226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.570 [2024-07-22 18:39:48.440347] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.440370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.446912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.570 [2024-07-22 18:39:48.447003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.447024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.451578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.570 [2024-07-22 18:39:48.451650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.451669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.457383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.570 [2024-07-22 18:39:48.457460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.457481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.463923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.570 [2024-07-22 18:39:48.464010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.464031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.469984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.570 [2024-07-22 18:39:48.470121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.470143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.473750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.570 [2024-07-22 18:39:48.473812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.473831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.479781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x61500002b280) 00:32:36.570 [2024-07-22 18:39:48.479866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.479887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.486122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.570 [2024-07-22 18:39:48.486186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.486205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.492223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.570 [2024-07-22 18:39:48.492285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.492304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.497861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.570 [2024-07-22 18:39:48.497924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.497942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.503758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.570 [2024-07-22 18:39:48.503835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.503869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.510319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.570 [2024-07-22 18:39:48.510438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.510459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.516441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.570 [2024-07-22 18:39:48.516520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.570 [2024-07-22 18:39:48.516541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.570 [2024-07-22 18:39:48.520808] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.571 [2024-07-22 18:39:48.520921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.571 [2024-07-22 18:39:48.520944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.571 [2024-07-22 18:39:48.527636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.571 [2024-07-22 18:39:48.527733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.571 [2024-07-22 18:39:48.527757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.571 [2024-07-22 18:39:48.532331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.571 [2024-07-22 18:39:48.532401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.571 [2024-07-22 18:39:48.532421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.571 [2024-07-22 18:39:48.537845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.571 [2024-07-22 18:39:48.537923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.571 [2024-07-22 18:39:48.537944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.571 [2024-07-22 18:39:48.544461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.571 [2024-07-22 18:39:48.544549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.571 [2024-07-22 18:39:48.544571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.571 [2024-07-22 18:39:48.549158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.571 [2024-07-22 18:39:48.549243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.571 [2024-07-22 18:39:48.549264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.571 [2024-07-22 18:39:48.554103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.571 [2024-07-22 18:39:48.554204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.571 [2024-07-22 18:39:48.554225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.571 [2024-07-22 18:39:48.559126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.571 [2024-07-22 18:39:48.559200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.571 [2024-07-22 18:39:48.559221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.571 [2024-07-22 18:39:48.564625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.571 [2024-07-22 18:39:48.564703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.571 [2024-07-22 18:39:48.564722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.571 [2024-07-22 18:39:48.569402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.571 [2024-07-22 18:39:48.569467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.571 [2024-07-22 18:39:48.569487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.571 [2024-07-22 18:39:48.575091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.571 [2024-07-22 18:39:48.575158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.571 [2024-07-22 18:39:48.575178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.571 [2024-07-22 18:39:48.579499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.571 [2024-07-22 18:39:48.579569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.571 [2024-07-22 18:39:48.579589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.831 [2024-07-22 18:39:48.584762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.831 [2024-07-22 18:39:48.584836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.831 [2024-07-22 18:39:48.584868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.831 [2024-07-22 18:39:48.589954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.831 [2024-07-22 18:39:48.590061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.831 [2024-07-22 18:39:48.590083] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.831 [2024-07-22 18:39:48.595359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.831 [2024-07-22 18:39:48.595413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.831 [2024-07-22 18:39:48.595435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.831 [2024-07-22 18:39:48.600171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.831 [2024-07-22 18:39:48.600233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.831 [2024-07-22 18:39:48.600252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.831 [2024-07-22 18:39:48.604968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.831 [2024-07-22 18:39:48.605036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.831 [2024-07-22 18:39:48.605056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.831 [2024-07-22 18:39:48.610301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.831 [2024-07-22 18:39:48.610386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.831 [2024-07-22 18:39:48.610423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.831 [2024-07-22 18:39:48.615560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.831 [2024-07-22 18:39:48.615640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.831 [2024-07-22 18:39:48.615661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.831 [2024-07-22 18:39:48.620724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.831 [2024-07-22 18:39:48.620809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.831 [2024-07-22 18:39:48.620830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.831 [2024-07-22 18:39:48.625804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.831 [2024-07-22 18:39:48.625921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:36.831 [2024-07-22 18:39:48.625943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.831 [2024-07-22 18:39:48.631615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.831 [2024-07-22 18:39:48.631730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.831 [2024-07-22 18:39:48.631752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.831 [2024-07-22 18:39:48.636930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.831 [2024-07-22 18:39:48.637033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.831 [2024-07-22 18:39:48.637055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.831 [2024-07-22 18:39:48.641765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.831 [2024-07-22 18:39:48.641838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.831 [2024-07-22 18:39:48.641870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.831 [2024-07-22 18:39:48.647372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.831 [2024-07-22 18:39:48.647449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.831 [2024-07-22 18:39:48.647468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.831 [2024-07-22 18:39:48.652039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.831 [2024-07-22 18:39:48.652131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.831 [2024-07-22 18:39:48.652152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.831 [2024-07-22 18:39:48.657715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.831 [2024-07-22 18:39:48.657792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.831 [2024-07-22 18:39:48.657813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.831 [2024-07-22 18:39:48.664054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.831 [2024-07-22 18:39:48.664136] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.831 [2024-07-22 18:39:48.664157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.831 [2024-07-22 18:39:48.668580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.831 [2024-07-22 18:39:48.668632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.831 [2024-07-22 18:39:48.668652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.673858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.673919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.832 [2024-07-22 18:39:48.673939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.680397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.680464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.832 [2024-07-22 18:39:48.680482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.686305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.686387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.832 [2024-07-22 18:39:48.686405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.692327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.692415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.832 [2024-07-22 18:39:48.692434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.698505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.698594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.832 [2024-07-22 18:39:48.698615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.704658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.704784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.832 [2024-07-22 18:39:48.704807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.711093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.711205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.832 [2024-07-22 18:39:48.711228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.717534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.717635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.832 [2024-07-22 18:39:48.717658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.722304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.722391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.832 [2024-07-22 18:39:48.722436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.728388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.728468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.832 [2024-07-22 18:39:48.728488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.734884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.734977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.832 [2024-07-22 18:39:48.734998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.739030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.739098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.832 [2024-07-22 18:39:48.739118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.744650] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.744705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.832 [2024-07-22 18:39:48.744730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.749745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.749807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.832 [2024-07-22 18:39:48.749826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.753963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.754048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.832 [2024-07-22 18:39:48.754069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.759577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.759641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.832 [2024-07-22 18:39:48.759659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.764196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.764274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.832 [2024-07-22 18:39:48.764293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.769256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.769340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.832 [2024-07-22 18:39:48.769360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.774950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.775017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.832 [2024-07-22 18:39:48.775036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.779382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.779447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.832 [2024-07-22 18:39:48.779466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.785285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.785387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.832 [2024-07-22 18:39:48.785406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.789818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.789914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.832 [2024-07-22 18:39:48.789935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.795592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.795704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.832 [2024-07-22 18:39:48.795725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.802513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.802620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.832 [2024-07-22 18:39:48.802641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.807389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.807457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.832 [2024-07-22 18:39:48.807477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.813201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.813301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.832 [2024-07-22 18:39:48.813321] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.832 [2024-07-22 18:39:48.818736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.832 [2024-07-22 18:39:48.818803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.833 [2024-07-22 18:39:48.818822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.833 [2024-07-22 18:39:48.823629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.833 [2024-07-22 18:39:48.823692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.833 [2024-07-22 18:39:48.823711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.833 [2024-07-22 18:39:48.828149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.833 [2024-07-22 18:39:48.828198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.833 [2024-07-22 18:39:48.828218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.833 [2024-07-22 18:39:48.834085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.833 [2024-07-22 18:39:48.834131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.833 [2024-07-22 18:39:48.834151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.833 [2024-07-22 18:39:48.840637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.833 [2024-07-22 18:39:48.840719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.833 [2024-07-22 18:39:48.840738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.833 [2024-07-22 18:39:48.845581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:36.833 [2024-07-22 18:39:48.845678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.833 [2024-07-22 18:39:48.845697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.093 [2024-07-22 18:39:48.851435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.093 [2024-07-22 18:39:48.851544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:32:37.093 [2024-07-22 18:39:48.851565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.093 [2024-07-22 18:39:48.858753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.093 [2024-07-22 18:39:48.858901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.093 [2024-07-22 18:39:48.858924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.093 [2024-07-22 18:39:48.863391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.093 [2024-07-22 18:39:48.863488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.093 [2024-07-22 18:39:48.863509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.093 [2024-07-22 18:39:48.869473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.093 [2024-07-22 18:39:48.869585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.093 [2024-07-22 18:39:48.869608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.093 [2024-07-22 18:39:48.876091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.093 [2024-07-22 18:39:48.876186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.093 [2024-07-22 18:39:48.876208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.093 [2024-07-22 18:39:48.880665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.093 [2024-07-22 18:39:48.880739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.093 [2024-07-22 18:39:48.880759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.093 [2024-07-22 18:39:48.885920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.093 [2024-07-22 18:39:48.885989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.093 [2024-07-22 18:39:48.886018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.093 [2024-07-22 18:39:48.890805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.093 [2024-07-22 18:39:48.890878] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.093 [2024-07-22 18:39:48.890897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.093 [2024-07-22 18:39:48.894941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.093 [2024-07-22 18:39:48.895002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.093 [2024-07-22 18:39:48.895021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.093 [2024-07-22 18:39:48.899991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.093 [2024-07-22 18:39:48.900060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.093 [2024-07-22 18:39:48.900078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.093 [2024-07-22 18:39:48.905318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.093 [2024-07-22 18:39:48.905393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.093 [2024-07-22 18:39:48.905412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.093 [2024-07-22 18:39:48.909827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.093 [2024-07-22 18:39:48.909901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.093 [2024-07-22 18:39:48.909920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.093 [2024-07-22 18:39:48.914898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.093 [2024-07-22 18:39:48.914969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.093 [2024-07-22 18:39:48.914988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.093 [2024-07-22 18:39:48.919580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.093 [2024-07-22 18:39:48.919655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.093 [2024-07-22 18:39:48.919674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.093 [2024-07-22 18:39:48.925036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x61500002b280) 00:32:37.093 [2024-07-22 18:39:48.925107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.093 [2024-07-22 18:39:48.925126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.093 [2024-07-22 18:39:48.929193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.093 [2024-07-22 18:39:48.929257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.093 [2024-07-22 18:39:48.929276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.093 [2024-07-22 18:39:48.935152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.093 [2024-07-22 18:39:48.935229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.093 [2024-07-22 18:39:48.935264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.093 [2024-07-22 18:39:48.940243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.093 [2024-07-22 18:39:48.940318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.093 [2024-07-22 18:39:48.940338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.093 [2024-07-22 18:39:48.944650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.093 [2024-07-22 18:39:48.944718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.093 [2024-07-22 18:39:48.944747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.093 [2024-07-22 18:39:48.950859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.093 [2024-07-22 18:39:48.950965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.093 [2024-07-22 18:39:48.950987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.093 [2024-07-22 18:39:48.955008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.093 [2024-07-22 18:39:48.955072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.093 [2024-07-22 18:39:48.955091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.094 [2024-07-22 18:39:48.960749] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.094 [2024-07-22 18:39:48.960830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.094 [2024-07-22 18:39:48.960891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.094 [2024-07-22 18:39:48.967710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.094 [2024-07-22 18:39:48.967800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.094 [2024-07-22 18:39:48.967821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.094 [2024-07-22 18:39:48.974249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.094 [2024-07-22 18:39:48.974319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.094 [2024-07-22 18:39:48.974341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.094 [2024-07-22 18:39:48.980672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.094 [2024-07-22 18:39:48.980764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.094 [2024-07-22 18:39:48.980785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.094 [2024-07-22 18:39:48.986737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.094 [2024-07-22 18:39:48.986833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.094 [2024-07-22 18:39:48.986883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.094 [2024-07-22 18:39:48.993573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.094 [2024-07-22 18:39:48.993655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.094 [2024-07-22 18:39:48.993676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.094 [2024-07-22 18:39:48.999410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.094 [2024-07-22 18:39:48.999486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.094 [2024-07-22 18:39:48.999506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.094 [2024-07-22 18:39:49.004891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.094 [2024-07-22 18:39:49.004963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.094 [2024-07-22 18:39:49.004984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.094 [2024-07-22 18:39:49.008927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.094 [2024-07-22 18:39:49.008993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.094 [2024-07-22 18:39:49.009013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.094 [2024-07-22 18:39:49.015289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.094 [2024-07-22 18:39:49.015362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.094 [2024-07-22 18:39:49.015382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.094 [2024-07-22 18:39:49.021685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.094 [2024-07-22 18:39:49.021767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.094 [2024-07-22 18:39:49.021787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.094 [2024-07-22 18:39:49.026265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.094 [2024-07-22 18:39:49.026333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.094 [2024-07-22 18:39:49.026383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.094 [2024-07-22 18:39:49.031831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.094 [2024-07-22 18:39:49.031921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.094 [2024-07-22 18:39:49.031943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.094 [2024-07-22 18:39:49.038294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.094 [2024-07-22 18:39:49.038422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.094 [2024-07-22 18:39:49.038444] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.094 [2024-07-22 18:39:49.043053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.094 [2024-07-22 18:39:49.043135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.094 [2024-07-22 18:39:49.043157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.094 [2024-07-22 18:39:49.048834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.094 [2024-07-22 18:39:49.048918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.094 [2024-07-22 18:39:49.048938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.094 [2024-07-22 18:39:49.055356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.094 [2024-07-22 18:39:49.055454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.094 [2024-07-22 18:39:49.055474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.094 [2024-07-22 18:39:49.061746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.094 [2024-07-22 18:39:49.061811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.094 [2024-07-22 18:39:49.061830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.094 [2024-07-22 18:39:49.066171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.094 [2024-07-22 18:39:49.066233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.094 [2024-07-22 18:39:49.066253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.094 [2024-07-22 18:39:49.071527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.094 [2024-07-22 18:39:49.071590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.094 [2024-07-22 18:39:49.071607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.094 [2024-07-22 18:39:49.077859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.094 [2024-07-22 18:39:49.077919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:416 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:37.094 [2024-07-22 18:39:49.077938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.094 [2024-07-22 18:39:49.084205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.094 [2024-07-22 18:39:49.084267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.094 [2024-07-22 18:39:49.084286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.094 [2024-07-22 18:39:49.088336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.094 [2024-07-22 18:39:49.088383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.094 [2024-07-22 18:39:49.088402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.094 [2024-07-22 18:39:49.095421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.094 [2024-07-22 18:39:49.095491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.094 [2024-07-22 18:39:49.095512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.094 [2024-07-22 18:39:49.102786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.094 [2024-07-22 18:39:49.102905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.094 [2024-07-22 18:39:49.102929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.094 [2024-07-22 18:39:49.107478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.095 [2024-07-22 18:39:49.107527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.095 [2024-07-22 18:39:49.107547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.354 [2024-07-22 18:39:49.113392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.354 [2024-07-22 18:39:49.113442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.354 [2024-07-22 18:39:49.113461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.354 [2024-07-22 18:39:49.120131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.354 [2024-07-22 18:39:49.120194] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.354 [2024-07-22 18:39:49.120213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.354 [2024-07-22 18:39:49.124558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.354 [2024-07-22 18:39:49.124620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.354 [2024-07-22 18:39:49.124653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.354 [2024-07-22 18:39:49.130006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.354 [2024-07-22 18:39:49.130092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.354 [2024-07-22 18:39:49.130111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.354 [2024-07-22 18:39:49.134724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.354 [2024-07-22 18:39:49.134793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.354 [2024-07-22 18:39:49.134812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.354 [2024-07-22 18:39:49.139886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.354 [2024-07-22 18:39:49.139952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.354 [2024-07-22 18:39:49.139972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.354 [2024-07-22 18:39:49.145874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.354 [2024-07-22 18:39:49.145935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.354 [2024-07-22 18:39:49.145954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.354 [2024-07-22 18:39:49.149879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.354 [2024-07-22 18:39:49.149937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.354 [2024-07-22 18:39:49.149954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.354 [2024-07-22 18:39:49.156124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:32:37.354 [2024-07-22 18:39:49.156184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.354 [2024-07-22 18:39:49.156202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.354 [2024-07-22 18:39:49.162557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:37.354 [2024-07-22 18:39:49.162619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.354 [2024-07-22 18:39:49.162637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.354 00:32:37.354 Latency(us) 00:32:37.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.354 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:37.354 nvme0n1 : 2.00 5722.84 715.36 0.00 0.00 2790.74 804.31 7447.27 00:32:37.354 =================================================================================================================== 00:32:37.354 Total : 5722.84 715.36 0.00 0.00 2790.74 804.31 7447.27 00:32:37.354 0 00:32:37.354 18:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:37.354 18:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:37.354 18:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:37.354 18:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:37.354 | .driver_specific 00:32:37.354 | .nvme_error 00:32:37.354 | .status_code 00:32:37.354 | .command_transient_transport_error' 00:32:37.613 18:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 369 > 0 )) 00:32:37.613 18:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 105007 00:32:37.613 18:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 105007 ']' 00:32:37.613 18:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 105007 00:32:37.613 18:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:32:37.613 18:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:37.613 18:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 105007 00:32:37.613 18:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:37.613 18:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:37.613 killing process with pid 105007 00:32:37.613 18:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 105007' 00:32:37.613 Received shutdown signal, test time was about 2.000000 seconds 00:32:37.613 00:32:37.613 Latency(us) 
00:32:37.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.613 =================================================================================================================== 00:32:37.613 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:37.613 18:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 105007 00:32:37.613 18:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 105007 00:32:38.986 18:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:32:38.986 18:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:38.986 18:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:32:38.986 18:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:32:38.986 18:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:32:38.986 18:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=105104 00:32:38.986 18:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 105104 /var/tmp/bperf.sock 00:32:38.986 18:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:32:38.986 18:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 105104 ']' 00:32:38.986 18:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:38.986 18:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:38.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:38.987 18:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:38.987 18:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:38.987 18:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:38.987 [2024-07-22 18:39:50.851866] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
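The randread leg wraps up just above: bdevperf reported about 5722 IOPS, the get_transient_errcount check found 369 completions counted under command_transient_transport_error (hence the (( 369 > 0 )) assertion passing), and bperf pid 105007 was killed and relaunched for the randwrite leg. That check reduces to one RPC plus a jq filter; a minimal sketch, paraphrased from the shell trace above rather than taken from the literal host/digest.sh source (the function body and the errcount variable are illustrative):

  # Ask the bdevperf instance for per-bdev I/O statistics over its RPC socket and pull out
  # the transient-transport-error counter from the NVMe error statistics for the given bdev.
  get_transient_errcount() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
          | jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
  }

  errcount=$(get_transient_errcount nvme0n1)
  # Mirrors the (( 369 > 0 )) assertion in the trace: the injected data digest errors must
  # surface as transient transport errors (and be retried) rather than as failed I/O.
  (( errcount > 0 ))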
00:32:38.987 [2024-07-22 18:39:50.852040] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105104 ] 00:32:39.244 [2024-07-22 18:39:51.018409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.512 [2024-07-22 18:39:51.289730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:39.787 18:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:39.787 18:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:32:39.787 18:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:39.787 18:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:40.046 18:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:40.046 18:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.046 18:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:40.304 18:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.304 18:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:40.304 18:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:40.563 nvme0n1 00:32:40.563 18:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:40.563 18:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.563 18:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:40.563 18:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.563 18:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:40.563 18:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:40.563 Running I/O for 2 seconds... 
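Before the 2-second randwrite run starts above, the trace configures the fresh bperf instance with the same digest-error injection pattern. Condensed into a sketch below: the rpc.py path, the bperf socket, and every flag are taken verbatim from the trace, but the two accel_error_inject_error calls go through rpc_cmd whose socket expansion is not visible in this excerpt, so addressing them to the default RPC socket here is an assumption.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # bdevperf side: keep NVMe error statistics and retry transient errors indefinitely.
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any leftover crc32c corruption before attaching (assumed default RPC socket; the
  # trace issues this via rpc_cmd).
  $rpc accel_error_inject_error -o crc32c -t disable

  # Attach the controller with TCP data digest enabled (--ddgst), exposing it as nvme0.
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt the next 256 crc32c operations so the computed data digests come out wrong.
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256

  # Drive the 2-second randwrite workload (4096-byte I/O, queue depth 128) through bdevperf.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each "Data digest error" at tcp.c:2113 in the output that follows, answered with a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, is that injection taking effect; with --bdev-retry-count -1 the affected writes are retried and land in the error counters instead of failing the job.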
00:32:40.563 [2024-07-22 18:39:52.525159] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6458 00:32:40.563 [2024-07-22 18:39:52.526607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.563 [2024-07-22 18:39:52.526669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:40.563 [2024-07-22 18:39:52.543072] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4de8 00:32:40.563 [2024-07-22 18:39:52.545223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.563 [2024-07-22 18:39:52.545279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:40.563 [2024-07-22 18:39:52.553766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e01f8 00:32:40.563 [2024-07-22 18:39:52.554726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.563 [2024-07-22 18:39:52.554769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:40.563 [2024-07-22 18:39:52.571704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1868 00:32:40.563 [2024-07-22 18:39:52.573546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.563 [2024-07-22 18:39:52.573591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:40.823 [2024-07-22 18:39:52.585938] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc998 00:32:40.823 [2024-07-22 18:39:52.587414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.823 [2024-07-22 18:39:52.587461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:40.823 [2024-07-22 18:39:52.601012] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4298 00:32:40.823 [2024-07-22 18:39:52.602468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.823 [2024-07-22 18:39:52.602529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:40.823 [2024-07-22 18:39:52.619716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e2c28 00:32:40.823 [2024-07-22 18:39:52.621998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.823 [2024-07-22 18:39:52.622052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:40.823 [2024-07-22 18:39:52.630723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e23b8 00:32:40.823 [2024-07-22 18:39:52.631783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.823 [2024-07-22 18:39:52.631871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:40.823 [2024-07-22 18:39:52.648963] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f3a28 00:32:40.823 [2024-07-22 18:39:52.650857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.823 [2024-07-22 18:39:52.650904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:40.823 [2024-07-22 18:39:52.663141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fa7d8 00:32:40.823 [2024-07-22 18:39:52.664714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.823 [2024-07-22 18:39:52.664773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:40.823 [2024-07-22 18:39:52.677568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f20d8 00:32:40.823 [2024-07-22 18:39:52.679108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.823 [2024-07-22 18:39:52.679169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:40.823 [2024-07-22 18:39:52.694869] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0a68 00:32:40.823 [2024-07-22 18:39:52.697098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.823 [2024-07-22 18:39:52.697155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:40.823 [2024-07-22 18:39:52.704986] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4578 00:32:40.823 [2024-07-22 18:39:52.706145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.823 [2024-07-22 18:39:52.706188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:40.823 [2024-07-22 18:39:52.722218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5be8 00:32:40.823 [2024-07-22 18:39:52.724155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.823 [2024-07-22 18:39:52.724215] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:40.823 [2024-07-22 18:39:52.735601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8618 00:32:40.823 [2024-07-22 18:39:52.737107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.823 [2024-07-22 18:39:52.737150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:40.823 [2024-07-22 18:39:52.749465] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:32:40.823 [2024-07-22 18:39:52.751068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.823 [2024-07-22 18:39:52.751129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:40.823 [2024-07-22 18:39:52.766760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de8a8 00:32:40.823 [2024-07-22 18:39:52.769328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.823 [2024-07-22 18:39:52.769390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.823 [2024-07-22 18:39:52.777526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6738 00:32:40.823 [2024-07-22 18:39:52.778834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.823 [2024-07-22 18:39:52.778919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.823 [2024-07-22 18:39:52.794817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fef90 00:32:40.823 [2024-07-22 18:39:52.796827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.823 [2024-07-22 18:39:52.796896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:40.823 [2024-07-22 18:39:52.805168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7538 00:32:40.823 [2024-07-22 18:39:52.806071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.823 [2024-07-22 18:39:52.806114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:40.823 [2024-07-22 18:39:52.823167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195edd58 00:32:40.823 [2024-07-22 18:39:52.824957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19393 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:40.823 [2024-07-22 18:39:52.825032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:40.823 [2024-07-22 18:39:52.837142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e01f8 00:32:40.823 [2024-07-22 18:39:52.838486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:40.823 [2024-07-22 18:39:52.838546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:41.082 [2024-07-22 18:39:52.851519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:32:41.082 [2024-07-22 18:39:52.852863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.082 [2024-07-22 18:39:52.852948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:41.082 [2024-07-22 18:39:52.868901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc998 00:32:41.082 [2024-07-22 18:39:52.871069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.082 [2024-07-22 18:39:52.871115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:41.082 [2024-07-22 18:39:52.879440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f96f8 00:32:41.083 [2024-07-22 18:39:52.880431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.083 [2024-07-22 18:39:52.880489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:41.083 [2024-07-22 18:39:52.897252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ebb98 00:32:41.083 [2024-07-22 18:39:52.899056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.083 [2024-07-22 18:39:52.899102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:41.083 [2024-07-22 18:39:52.910957] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e23b8 00:32:41.083 [2024-07-22 18:39:52.912396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.083 [2024-07-22 18:39:52.912454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:41.083 [2024-07-22 18:39:52.925775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaab8 00:32:41.083 [2024-07-22 18:39:52.927283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:86 nsid:1 lba:9139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.083 [2024-07-22 18:39:52.927341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:41.083 [2024-07-22 18:39:52.943453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fa7d8 00:32:41.083 [2024-07-22 18:39:52.945715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.083 [2024-07-22 18:39:52.945773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:41.083 [2024-07-22 18:39:52.954183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb8b8 00:32:41.083 [2024-07-22 18:39:52.955348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.083 [2024-07-22 18:39:52.955406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:41.083 [2024-07-22 18:39:52.971180] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e99d8 00:32:41.083 [2024-07-22 18:39:52.973018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.083 [2024-07-22 18:39:52.973075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:41.083 [2024-07-22 18:39:52.984144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4578 00:32:41.083 [2024-07-22 18:39:52.985616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.083 [2024-07-22 18:39:52.985687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:41.083 [2024-07-22 18:39:52.997800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ecc78 00:32:41.083 [2024-07-22 18:39:52.999367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.083 [2024-07-22 18:39:52.999424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:41.083 [2024-07-22 18:39:53.014253] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8618 00:32:41.083 [2024-07-22 18:39:53.016447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.083 [2024-07-22 18:39:53.016504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:41.083 [2024-07-22 18:39:53.024070] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:32:41.083 [2024-07-22 18:39:53.025283] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.083 [2024-07-22 18:39:53.025356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:41.083 [2024-07-22 18:39:53.043078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7818 00:32:41.083 [2024-07-22 18:39:53.045058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.083 [2024-07-22 18:39:53.045106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:41.083 [2024-07-22 18:39:53.056124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6738 00:32:41.083 [2024-07-22 18:39:53.057770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.083 [2024-07-22 18:39:53.057830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:41.083 [2024-07-22 18:39:53.070022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eee38 00:32:41.083 [2024-07-22 18:39:53.071612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.083 [2024-07-22 18:39:53.071670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.083 [2024-07-22 18:39:53.083131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7538 00:32:41.083 [2024-07-22 18:39:53.084424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.083 [2024-07-22 18:39:53.084485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.083 [2024-07-22 18:39:53.097349] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ddc00 00:32:41.083 [2024-07-22 18:39:53.098698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.083 [2024-07-22 18:39:53.098763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:41.342 [2024-07-22 18:39:53.114165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e01f8 00:32:41.342 [2024-07-22 18:39:53.116364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.342 [2024-07-22 18:39:53.116421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:41.342 [2024-07-22 18:39:53.124215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5ec8 
00:32:41.342 [2024-07-22 18:39:53.125190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.342 [2024-07-22 18:39:53.125246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:41.342 [2024-07-22 18:39:53.140791] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0ff8 00:32:41.342 [2024-07-22 18:39:53.142567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.342 [2024-07-22 18:39:53.142626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:41.342 [2024-07-22 18:39:53.153787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f96f8 00:32:41.342 [2024-07-22 18:39:53.155111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.342 [2024-07-22 18:39:53.155168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:41.342 [2024-07-22 18:39:53.167211] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4b08 00:32:41.342 [2024-07-22 18:39:53.168615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.342 [2024-07-22 18:39:53.168656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:41.342 [2024-07-22 18:39:53.185242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e23b8 00:32:41.342 [2024-07-22 18:39:53.187463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.342 [2024-07-22 18:39:53.187510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:41.342 [2024-07-22 18:39:53.195798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3d08 00:32:41.342 [2024-07-22 18:39:53.196848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.342 [2024-07-22 18:39:53.196912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:41.342 [2024-07-22 18:39:53.213066] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f31b8 00:32:41.342 [2024-07-22 18:39:53.214873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.342 [2024-07-22 18:39:53.214931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:41.342 [2024-07-22 18:39:53.226235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000005480) with pdu=0x2000195fb8b8 00:32:41.342 [2024-07-22 18:39:53.227703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.342 [2024-07-22 18:39:53.227746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:41.342 [2024-07-22 18:39:53.240739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2948 00:32:41.342 [2024-07-22 18:39:53.242263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.343 [2024-07-22 18:39:53.242308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:41.343 [2024-07-22 18:39:53.258290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4578 00:32:41.343 [2024-07-22 18:39:53.260579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.343 [2024-07-22 18:39:53.260638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:41.343 [2024-07-22 18:39:53.268852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:32:41.343 [2024-07-22 18:39:53.270031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.343 [2024-07-22 18:39:53.270073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:41.343 [2024-07-22 18:39:53.286677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5378 00:32:41.343 [2024-07-22 18:39:53.288623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.343 [2024-07-22 18:39:53.288681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:41.343 [2024-07-22 18:39:53.300040] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:32:41.343 [2024-07-22 18:39:53.301604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.343 [2024-07-22 18:39:53.301662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:41.343 [2024-07-22 18:39:53.313613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ebfd0 00:32:41.343 [2024-07-22 18:39:53.314820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.343 [2024-07-22 18:39:53.314910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:41.343 [2024-07-22 
18:39:53.327851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e23b8 00:32:41.343 [2024-07-22 18:39:53.328874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.343 [2024-07-22 18:39:53.328916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:41.343 [2024-07-22 18:39:53.344795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fac10 00:32:41.343 [2024-07-22 18:39:53.346068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.343 [2024-07-22 18:39:53.346112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:41.601 [2024-07-22 18:39:53.359656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6890 00:32:41.601 [2024-07-22 18:39:53.361443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.601 [2024-07-22 18:39:53.361486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:41.601 [2024-07-22 18:39:53.373265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1868 00:32:41.601 [2024-07-22 18:39:53.374719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.601 [2024-07-22 18:39:53.374780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:41.601 [2024-07-22 18:39:53.387717] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e84c0 00:32:41.601 [2024-07-22 18:39:53.389183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.601 [2024-07-22 18:39:53.389226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:41.601 [2024-07-22 18:39:53.405287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0350 00:32:41.601 [2024-07-22 18:39:53.407527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.601 [2024-07-22 18:39:53.407588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:41.601 [2024-07-22 18:39:53.415753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eea00 00:32:41.601 [2024-07-22 18:39:53.416885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.601 [2024-07-22 18:39:53.416951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 
cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:41.601 [2024-07-22 18:39:53.433092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de470 00:32:41.601 [2024-07-22 18:39:53.435038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.601 [2024-07-22 18:39:53.435098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:41.602 [2024-07-22 18:39:53.446859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e38d0 00:32:41.602 [2024-07-22 18:39:53.448404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.602 [2024-07-22 18:39:53.448462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:41.602 [2024-07-22 18:39:53.460958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4f40 00:32:41.602 [2024-07-22 18:39:53.462545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.602 [2024-07-22 18:39:53.462605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:41.602 [2024-07-22 18:39:53.478813] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195feb58 00:32:41.602 [2024-07-22 18:39:53.481140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.602 [2024-07-22 18:39:53.481185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:41.602 [2024-07-22 18:39:53.489178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec840 00:32:41.602 [2024-07-22 18:39:53.490373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.602 [2024-07-22 18:39:53.490432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:41.602 [2024-07-22 18:39:53.507423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6cc8 00:32:41.602 [2024-07-22 18:39:53.509437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.602 [2024-07-22 18:39:53.509483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:41.602 [2024-07-22 18:39:53.521354] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1710 00:32:41.602 [2024-07-22 18:39:53.523085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.602 [2024-07-22 18:39:53.523132] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:41.602 [2024-07-22 18:39:53.536441] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2d80 00:32:41.602 [2024-07-22 18:39:53.538127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.602 [2024-07-22 18:39:53.538173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:41.602 [2024-07-22 18:39:53.550493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ff3c8 00:32:41.602 [2024-07-22 18:39:53.551789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.602 [2024-07-22 18:39:53.551860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:41.602 [2024-07-22 18:39:53.564884] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fd640 00:32:41.602 [2024-07-22 18:39:53.566198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.602 [2024-07-22 18:39:53.566245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:41.602 [2024-07-22 18:39:53.582243] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8e88 00:32:41.602 [2024-07-22 18:39:53.584347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.602 [2024-07-22 18:39:53.584409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:41.602 [2024-07-22 18:39:53.592752] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e27f0 00:32:41.602 [2024-07-22 18:39:53.593721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.602 [2024-07-22 18:39:53.593785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:41.602 [2024-07-22 18:39:53.610768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9e10 00:32:41.602 [2024-07-22 18:39:53.612517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.602 [2024-07-22 18:39:53.612562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:41.861 [2024-07-22 18:39:53.624944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0bc0 00:32:41.861 [2024-07-22 18:39:53.626349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.861 [2024-07-22 
18:39:53.626394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:41.861 [2024-07-22 18:39:53.639406] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb480 00:32:41.861 [2024-07-22 18:39:53.640773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.861 [2024-07-22 18:39:53.640816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:41.861 [2024-07-22 18:39:53.657046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1868 00:32:41.861 [2024-07-22 18:39:53.659299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.861 [2024-07-22 18:39:53.659344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:41.861 [2024-07-22 18:39:53.667396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e49b0 00:32:41.861 [2024-07-22 18:39:53.668476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.861 [2024-07-22 18:39:53.668533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:41.861 [2024-07-22 18:39:53.684945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fa3a0 00:32:41.861 [2024-07-22 18:39:53.686821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.861 [2024-07-22 18:39:53.686907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:41.861 [2024-07-22 18:39:53.698587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eea00 00:32:41.861 [2024-07-22 18:39:53.700089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.861 [2024-07-22 18:39:53.700149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:41.861 [2024-07-22 18:39:53.712709] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8d30 00:32:41.861 [2024-07-22 18:39:53.714238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.861 [2024-07-22 18:39:53.714282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:41.861 [2024-07-22 18:39:53.730156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e38d0 00:32:41.861 [2024-07-22 18:39:53.732403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9125 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.861 [2024-07-22 18:39:53.732462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:41.861 [2024-07-22 18:39:53.740355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2948 00:32:41.861 [2024-07-22 18:39:53.741515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.861 [2024-07-22 18:39:53.741573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:41.861 [2024-07-22 18:39:53.757395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc560 00:32:41.861 [2024-07-22 18:39:53.759265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.861 [2024-07-22 18:39:53.759325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:41.861 [2024-07-22 18:39:53.770665] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec840 00:32:41.861 [2024-07-22 18:39:53.772202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.861 [2024-07-22 18:39:53.772274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:41.861 [2024-07-22 18:39:53.784464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaef0 00:32:41.861 [2024-07-22 18:39:53.786075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.861 [2024-07-22 18:39:53.786116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:41.861 [2024-07-22 18:39:53.801364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1710 00:32:41.861 [2024-07-22 18:39:53.803641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.861 [2024-07-22 18:39:53.803702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:41.861 [2024-07-22 18:39:53.811415] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7da8 00:32:41.861 [2024-07-22 18:39:53.812730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.861 [2024-07-22 18:39:53.812802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:41.861 [2024-07-22 18:39:53.828967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ff3c8 00:32:41.861 [2024-07-22 18:39:53.831015] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.861 [2024-07-22 18:39:53.831075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:41.861 [2024-07-22 18:39:53.839556] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6738 00:32:41.861 [2024-07-22 18:39:53.840418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.861 [2024-07-22 18:39:53.840461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:41.861 [2024-07-22 18:39:53.857980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f3e60 00:32:41.861 [2024-07-22 18:39:53.859743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.861 [2024-07-22 18:39:53.859802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:41.861 [2024-07-22 18:39:53.871894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e27f0 00:32:41.861 [2024-07-22 18:39:53.873376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:41.862 [2024-07-22 18:39:53.873434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:42.121 [2024-07-22 18:39:53.886143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195dfdc0 00:32:42.121 [2024-07-22 18:39:53.887493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.121 [2024-07-22 18:39:53.887537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:42.121 [2024-07-22 18:39:53.904136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0bc0 00:32:42.121 [2024-07-22 18:39:53.906336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.121 [2024-07-22 18:39:53.906381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:42.121 [2024-07-22 18:39:53.914527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed0b0 00:32:42.121 [2024-07-22 18:39:53.915547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.121 [2024-07-22 18:39:53.915590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:42.121 [2024-07-22 18:39:53.932358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6020 
00:32:42.121 [2024-07-22 18:39:53.934134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.121 [2024-07-22 18:39:53.934180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:42.121 [2024-07-22 18:39:53.946676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e49b0 00:32:42.121 [2024-07-22 18:39:53.948161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.121 [2024-07-22 18:39:53.948204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:42.121 [2024-07-22 18:39:53.961184] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7c50 00:32:42.121 [2024-07-22 18:39:53.962652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.121 [2024-07-22 18:39:53.962712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:42.121 [2024-07-22 18:39:53.979231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eea00 00:32:42.121 [2024-07-22 18:39:53.981548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.121 [2024-07-22 18:39:53.981596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:42.121 [2024-07-22 18:39:53.990254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef270 00:32:42.121 [2024-07-22 18:39:53.991381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.121 [2024-07-22 18:39:53.991424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:42.121 [2024-07-22 18:39:54.008368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7100 00:32:42.121 [2024-07-22 18:39:54.010084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.121 [2024-07-22 18:39:54.010128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:42.121 [2024-07-22 18:39:54.022073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc998 00:32:42.121 [2024-07-22 18:39:54.023518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.121 [2024-07-22 18:39:54.023576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:42.121 [2024-07-22 18:39:54.036141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000005480) with pdu=0x2000195efae0 00:32:42.121 [2024-07-22 18:39:54.037459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.121 [2024-07-22 18:39:54.037503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:42.121 [2024-07-22 18:39:54.051203] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3060 00:32:42.121 [2024-07-22 18:39:54.052525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.121 [2024-07-22 18:39:54.052578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:42.121 [2024-07-22 18:39:54.066274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee190 00:32:42.121 [2024-07-22 18:39:54.067599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.121 [2024-07-22 18:39:54.067665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:42.121 [2024-07-22 18:39:54.082356] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fac10 00:32:42.121 [2024-07-22 18:39:54.084076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.121 [2024-07-22 18:39:54.084137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:42.121 [2024-07-22 18:39:54.096587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5ec8 00:32:42.121 [2024-07-22 18:39:54.097973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.121 [2024-07-22 18:39:54.098026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:42.121 [2024-07-22 18:39:54.111255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e38d0 00:32:42.121 [2024-07-22 18:39:54.112580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.121 [2024-07-22 18:39:54.112624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:42.121 [2024-07-22 18:39:54.129377] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0ff8 00:32:42.121 [2024-07-22 18:39:54.131528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.121 [2024-07-22 18:39:54.131578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:42.380 [2024-07-22 
18:39:54.140108] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef270 00:32:42.380 [2024-07-22 18:39:54.141078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.380 [2024-07-22 18:39:54.141121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:42.380 [2024-07-22 18:39:54.157828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df550 00:32:42.381 [2024-07-22 18:39:54.159592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.381 [2024-07-22 18:39:54.159652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:42.381 [2024-07-22 18:39:54.171576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:32:42.381 [2024-07-22 18:39:54.173013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.381 [2024-07-22 18:39:54.173057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:42.381 [2024-07-22 18:39:54.186141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6020 00:32:42.381 [2024-07-22 18:39:54.187566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.381 [2024-07-22 18:39:54.187611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:42.381 [2024-07-22 18:39:54.204007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f20d8 00:32:42.381 [2024-07-22 18:39:54.206290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.381 [2024-07-22 18:39:54.206340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:42.381 [2024-07-22 18:39:54.215189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f20d8 00:32:42.381 [2024-07-22 18:39:54.216279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.381 [2024-07-22 18:39:54.216326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:42.381 [2024-07-22 18:39:54.233615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6020 00:32:42.381 [2024-07-22 18:39:54.235610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.381 [2024-07-22 18:39:54.235689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:123 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:42.381 [2024-07-22 18:39:54.247952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2d80 00:32:42.381 [2024-07-22 18:39:54.249528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.381 [2024-07-22 18:39:54.249578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:42.381 [2024-07-22 18:39:54.262878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df550 00:32:42.381 [2024-07-22 18:39:54.264428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.381 [2024-07-22 18:39:54.264479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:42.381 [2024-07-22 18:39:54.281152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef270 00:32:42.381 [2024-07-22 18:39:54.283586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.381 [2024-07-22 18:39:54.283639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:42.381 [2024-07-22 18:39:54.292019] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0ff8 00:32:42.381 [2024-07-22 18:39:54.293211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.381 [2024-07-22 18:39:54.293259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:42.381 [2024-07-22 18:39:54.310373] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e38d0 00:32:42.381 [2024-07-22 18:39:54.312439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.381 [2024-07-22 18:39:54.312491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:42.381 [2024-07-22 18:39:54.324194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e12d8 00:32:42.381 [2024-07-22 18:39:54.325858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.381 [2024-07-22 18:39:54.325918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:42.381 [2024-07-22 18:39:54.338661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fac10 00:32:42.381 [2024-07-22 18:39:54.340360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.381 [2024-07-22 18:39:54.340406] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:42.381 [2024-07-22 18:39:54.356153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6cc8 00:32:42.381 [2024-07-22 18:39:54.358543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.381 [2024-07-22 18:39:54.358591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.381 [2024-07-22 18:39:54.366622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:32:42.381 [2024-07-22 18:39:54.367941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.381 [2024-07-22 18:39:54.367987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.381 [2024-07-22 18:39:54.384124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5ec8 00:32:42.381 [2024-07-22 18:39:54.386324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.381 [2024-07-22 18:39:54.386391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:42.381 [2024-07-22 18:39:54.394820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:32:42.381 [2024-07-22 18:39:54.395767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.381 [2024-07-22 18:39:54.395829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:42.721 [2024-07-22 18:39:54.413165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fcdd0 00:32:42.721 [2024-07-22 18:39:54.414987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.721 [2024-07-22 18:39:54.415042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:42.721 [2024-07-22 18:39:54.426631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef270 00:32:42.721 [2024-07-22 18:39:54.428078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.721 [2024-07-22 18:39:54.428124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:42.721 [2024-07-22 18:39:54.441072] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec840 00:32:42.721 [2024-07-22 18:39:54.442506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:42.721 [2024-07-22 18:39:54.442583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:42.721 [2024-07-22 18:39:54.458686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:32:42.721 [2024-07-22 18:39:54.460917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.721 [2024-07-22 18:39:54.460961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:42.721 [2024-07-22 18:39:54.468828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:32:42.721 [2024-07-22 18:39:54.469855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.721 [2024-07-22 18:39:54.469929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:42.721 [2024-07-22 18:39:54.485589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0788 00:32:42.721 [2024-07-22 18:39:54.487463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.721 [2024-07-22 18:39:54.487508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:42.721 [2024-07-22 18:39:54.498697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f20d8 00:32:42.721 [2024-07-22 18:39:54.500169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.721 [2024-07-22 18:39:54.500216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:42.721 00:32:42.721 Latency(us) 00:32:42.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.721 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:42.721 nvme0n1 : 2.00 17247.98 67.37 0.00 0.00 7412.55 3619.37 18230.92 00:32:42.721 =================================================================================================================== 00:32:42.721 Total : 17247.98 67.37 0.00 0.00 7412.55 3619.37 18230.92 00:32:42.721 0 00:32:42.721 18:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:42.721 18:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:42.721 18:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:42.721 18:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:42.721 | .driver_specific 00:32:42.721 | .nvme_error 00:32:42.721 | .status_code 00:32:42.721 | .command_transient_transport_error' 00:32:42.980 18:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # 
(( 135 > 0 )) 00:32:42.980 18:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 105104 00:32:42.980 18:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 105104 ']' 00:32:42.980 18:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 105104 00:32:42.980 18:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:32:42.980 18:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:42.980 18:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 105104 00:32:42.980 18:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:42.980 killing process with pid 105104 00:32:42.980 Received shutdown signal, test time was about 2.000000 seconds 00:32:42.980 00:32:42.980 Latency(us) 00:32:42.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.980 =================================================================================================================== 00:32:42.980 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:42.980 18:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:42.980 18:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 105104' 00:32:42.980 18:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 105104 00:32:42.980 18:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 105104 00:32:43.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
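The transient-error check traced just above (bperf_rpc bdev_get_iostat piped through jq, then the (( 135 > 0 )) assertion) condenses to the following sketch. It is reconstructed from the xtrace rather than copied from host/digest.sh; the rpc.py path, socket, bdev name and jq filter are taken from the log, everything else is illustrative.

#!/usr/bin/env bash
# Sketch: count COMMAND TRANSIENT TRANSPORT errors seen by bdevperf's nvme bdev
# and assert that the digest-error injection produced at least one of them.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

get_transient_errcount() {
	# Per-controller NVMe error counters are available because the controller
	# is attached with --nvme-error-stat (see the bdev_nvme_set_options trace).
	"$rpc_py" -s "$bperf_sock" bdev_get_iostat -b "$1" \
		| jq -r '.bdevs[0]
			| .driver_specific
			| .nvme_error
			| .status_code
			| .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)  # 135 in the run above
(( errcount > 0 ))  # non-zero exit (test failure) if no transient errors were counted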
00:32:43.916 18:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:32:43.916 18:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:43.916 18:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:32:43.916 18:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:32:43.916 18:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:32:43.916 18:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=105207 00:32:43.916 18:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 105207 /var/tmp/bperf.sock 00:32:43.916 18:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:32:43.916 18:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 105207 ']' 00:32:43.916 18:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:43.916 18:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:43.916 18:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:43.916 18:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:43.916 18:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:44.175 [2024-07-22 18:39:55.995110] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:44.175 [2024-07-22 18:39:55.995609] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105207 ] 00:32:44.175 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:44.175 Zero copy mechanism will not be used. 
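The run_bperf_err randwrite 131072 16 trace above amounts to starting a second bdevperf instance in RPC-wait mode on its own socket and blocking until that socket answers. Below is a minimal sketch of that launch step, assuming the command line shown in the trace; the polling loop is a simplified stand-in for the autotest waitforlisten helper, not its actual implementation.

#!/usr/bin/env bash
# Launch bdevperf with -z (wait for a 'perform_tests' RPC before running I/O)
# on a private UNIX domain socket, then poll until the RPC server is reachable.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
bperf_sock=/var/tmp/bperf.sock

"$bdevperf" -m 2 -r "$bperf_sock" -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket $bperf_sock..."

# Simplified stand-in for waitforlisten: succeed once rpc_get_methods answers.
for _ in $(seq 1 100); do
	"$rpc_py" -t 1 -s "$bperf_sock" rpc_get_methods &> /dev/null && break
	sleep 0.1
done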
00:32:44.175 [2024-07-22 18:39:56.170085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.434 [2024-07-22 18:39:56.441431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:45.000 18:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:45.000 18:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:32:45.000 18:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:45.000 18:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:45.259 18:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:45.259 18:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.259 18:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:45.259 18:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.259 18:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:45.259 18:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:45.517 nvme0n1 00:32:45.517 18:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:45.517 18:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.517 18:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:45.517 18:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.517 18:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:45.517 18:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:45.777 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:45.777 Zero copy mechanism will not be used. 00:32:45.777 Running I/O for 2 seconds... 
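Stripped of xtrace noise, the setup just traced is a short RPC sequence: enable per-controller NVMe error counters and unlimited bdev retries on the bdevperf side, make sure CRC-32C error injection starts out disabled on the target, attach the controller with data digest (--ddgst) enabled so nvme0n1 appears, arm the target's accel layer to corrupt the next 32 CRC-32C operations, and start the workload. The sketch below condenses that sequence from the trace; rpc_cmd here is a simplified stand-in for the autotest helper and is assumed to address the nvmf target's default socket.

#!/usr/bin/env bash
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
bperf_sock=/var/tmp/bperf.sock

# Simplified stand-in for the autotest rpc_cmd helper: RPC to the target app.
rpc_cmd() { "$rpc_py" "$@"; }

# Count NVMe errors per controller and retry failed I/O indefinitely.
"$rpc_py" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Start with CRC-32C error injection disabled on the target.
rpc_cmd accel_error_inject_error -o crc32c -t disable

# Attach the TCP controller with data digest enabled; this exposes bdev nvme0n1.
"$rpc_py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
	-f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt the next 32 CRC-32C calculations so the host sees data digest errors.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the 2-second randwrite workload configured at bdevperf launch.
"$bperf_py" -s "$bperf_sock" perform_tests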
00:32:45.777 [2024-07-22 18:39:57.609688] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.777 [2024-07-22 18:39:57.610153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-07-22 18:39:57.610210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.777 [2024-07-22 18:39:57.616189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.777 [2024-07-22 18:39:57.616567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-07-22 18:39:57.616618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.777 [2024-07-22 18:39:57.623008] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.777 [2024-07-22 18:39:57.623409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-07-22 18:39:57.623458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.777 [2024-07-22 18:39:57.629827] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.777 [2024-07-22 18:39:57.630248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-07-22 18:39:57.630301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.777 [2024-07-22 18:39:57.636058] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.777 [2024-07-22 18:39:57.636389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-07-22 18:39:57.636440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.777 [2024-07-22 18:39:57.641922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.777 [2024-07-22 18:39:57.642273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-07-22 18:39:57.642323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.777 [2024-07-22 18:39:57.647643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.777 [2024-07-22 18:39:57.647977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-07-22 18:39:57.648022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.777 [2024-07-22 18:39:57.653319] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.777 [2024-07-22 18:39:57.653625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-07-22 18:39:57.653710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.777 [2024-07-22 18:39:57.659188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.777 [2024-07-22 18:39:57.659497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-07-22 18:39:57.659544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.777 [2024-07-22 18:39:57.664890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.777 [2024-07-22 18:39:57.665225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-07-22 18:39:57.665289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.777 [2024-07-22 18:39:57.670638] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.777 [2024-07-22 18:39:57.670942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-07-22 18:39:57.670984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.777 [2024-07-22 18:39:57.676291] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.777 [2024-07-22 18:39:57.676608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-07-22 18:39:57.676656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.777 [2024-07-22 18:39:57.681831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.777 [2024-07-22 18:39:57.682211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-07-22 18:39:57.682261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.777 [2024-07-22 18:39:57.687506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.777 [2024-07-22 18:39:57.687828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-07-22 
18:39:57.687893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.777 [2024-07-22 18:39:57.693113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.777 [2024-07-22 18:39:57.693446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-07-22 18:39:57.693497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.777 [2024-07-22 18:39:57.698669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.777 [2024-07-22 18:39:57.699008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-07-22 18:39:57.699056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.777 [2024-07-22 18:39:57.704418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.777 [2024-07-22 18:39:57.704722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-07-22 18:39:57.704770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.777 [2024-07-22 18:39:57.710314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.777 [2024-07-22 18:39:57.710675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-07-22 18:39:57.710741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.777 [2024-07-22 18:39:57.716450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.777 [2024-07-22 18:39:57.716789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-07-22 18:39:57.716849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.777 [2024-07-22 18:39:57.722642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.777 [2024-07-22 18:39:57.723021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.777 [2024-07-22 18:39:57.723076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.777 [2024-07-22 18:39:57.729010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.778 [2024-07-22 18:39:57.729363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.778 [2024-07-22 18:39:57.729413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.778 [2024-07-22 18:39:57.735241] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.778 [2024-07-22 18:39:57.735504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.778 [2024-07-22 18:39:57.735537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.778 [2024-07-22 18:39:57.741546] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.778 [2024-07-22 18:39:57.741898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.778 [2024-07-22 18:39:57.741967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.778 [2024-07-22 18:39:57.747951] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.778 [2024-07-22 18:39:57.748228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.778 [2024-07-22 18:39:57.748277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.778 [2024-07-22 18:39:57.754064] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.778 [2024-07-22 18:39:57.754370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.778 [2024-07-22 18:39:57.754419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.778 [2024-07-22 18:39:57.760271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.778 [2024-07-22 18:39:57.760590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.778 [2024-07-22 18:39:57.760639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.778 [2024-07-22 18:39:57.766424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.778 [2024-07-22 18:39:57.766751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.778 [2024-07-22 18:39:57.766799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.778 [2024-07-22 18:39:57.772348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.778 [2024-07-22 18:39:57.772665] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.778 [2024-07-22 18:39:57.772714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.778 [2024-07-22 18:39:57.778175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.778 [2024-07-22 18:39:57.778496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.778 [2024-07-22 18:39:57.778546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.778 [2024-07-22 18:39:57.784525] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.778 [2024-07-22 18:39:57.784886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.778 [2024-07-22 18:39:57.784918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.778 [2024-07-22 18:39:57.790938] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:45.778 [2024-07-22 18:39:57.791287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.778 [2024-07-22 18:39:57.791336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.037 [2024-07-22 18:39:57.797168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.037 [2024-07-22 18:39:57.797494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.037 [2024-07-22 18:39:57.797544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.037 [2024-07-22 18:39:57.803215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.037 [2024-07-22 18:39:57.803534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.037 [2024-07-22 18:39:57.803596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.037 [2024-07-22 18:39:57.809169] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.037 [2024-07-22 18:39:57.809507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.809551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 18:39:57.815060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.038 [2024-07-22 18:39:57.815362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.815410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 18:39:57.821042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.038 [2024-07-22 18:39:57.821375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.821419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 18:39:57.827470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.038 [2024-07-22 18:39:57.827749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.827803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 18:39:57.833702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.038 [2024-07-22 18:39:57.834041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.834084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 18:39:57.840016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.038 [2024-07-22 18:39:57.840345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.840395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 18:39:57.845992] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.038 [2024-07-22 18:39:57.846294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.846345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 18:39:57.851799] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.038 [2024-07-22 18:39:57.852136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.852191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 
18:39:57.857882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.038 [2024-07-22 18:39:57.858219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.858286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 18:39:57.863576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.038 [2024-07-22 18:39:57.863908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.863951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 18:39:57.869745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.038 [2024-07-22 18:39:57.870062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.870120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 18:39:57.875808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.038 [2024-07-22 18:39:57.876135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.876184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 18:39:57.882042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.038 [2024-07-22 18:39:57.882327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.882378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 18:39:57.888203] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.038 [2024-07-22 18:39:57.888518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.888563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 18:39:57.894677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.038 [2024-07-22 18:39:57.895012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.895062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 18:39:57.900762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.038 [2024-07-22 18:39:57.901097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.901140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 18:39:57.906779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.038 [2024-07-22 18:39:57.907130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.907181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 18:39:57.913018] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.038 [2024-07-22 18:39:57.913342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.913407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 18:39:57.919486] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.038 [2024-07-22 18:39:57.919786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.919845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 18:39:57.925806] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.038 [2024-07-22 18:39:57.926166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.926213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 18:39:57.932045] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.038 [2024-07-22 18:39:57.932372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.932421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 18:39:57.937959] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.038 [2024-07-22 18:39:57.938278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.938324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 18:39:57.943845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.038 [2024-07-22 18:39:57.944155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.944199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 18:39:57.949797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.038 [2024-07-22 18:39:57.950158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.950209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 18:39:57.956131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.038 [2024-07-22 18:39:57.956435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.956485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 18:39:57.962545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.038 [2024-07-22 18:39:57.962903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.962954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 18:39:57.968965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.038 [2024-07-22 18:39:57.969269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.038 [2024-07-22 18:39:57.969321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.038 [2024-07-22 18:39:57.975345] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.039 [2024-07-22 18:39:57.975637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.039 [2024-07-22 18:39:57.975683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.039 [2024-07-22 18:39:57.981655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.039 [2024-07-22 18:39:57.981970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:46.039 [2024-07-22 18:39:57.982017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.039 [2024-07-22 18:39:57.987941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.039 [2024-07-22 18:39:57.988252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.039 [2024-07-22 18:39:57.988300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.039 [2024-07-22 18:39:57.994064] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.039 [2024-07-22 18:39:57.994347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.039 [2024-07-22 18:39:57.994412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.039 [2024-07-22 18:39:58.000102] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.039 [2024-07-22 18:39:58.000409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.039 [2024-07-22 18:39:58.000452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.039 [2024-07-22 18:39:58.006330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.039 [2024-07-22 18:39:58.006608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.039 [2024-07-22 18:39:58.006658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.039 [2024-07-22 18:39:58.012542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.039 [2024-07-22 18:39:58.012835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.039 [2024-07-22 18:39:58.012888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.039 [2024-07-22 18:39:58.018731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.039 [2024-07-22 18:39:58.019029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.039 [2024-07-22 18:39:58.019070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.039 [2024-07-22 18:39:58.025075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.039 [2024-07-22 18:39:58.025358] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.039 [2024-07-22 18:39:58.025408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.039 [2024-07-22 18:39:58.031312] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.039 [2024-07-22 18:39:58.031585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.039 [2024-07-22 18:39:58.031645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.039 [2024-07-22 18:39:58.037578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.039 [2024-07-22 18:39:58.037876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.039 [2024-07-22 18:39:58.037935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.039 [2024-07-22 18:39:58.043798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.039 [2024-07-22 18:39:58.044117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.039 [2024-07-22 18:39:58.044163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.039 [2024-07-22 18:39:58.050229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.039 [2024-07-22 18:39:58.050507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.039 [2024-07-22 18:39:58.050556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.314 [2024-07-22 18:39:58.056428] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.314 [2024-07-22 18:39:58.056701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.314 [2024-07-22 18:39:58.056749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.315 [2024-07-22 18:39:58.062569] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.315 [2024-07-22 18:39:58.062901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.315 [2024-07-22 18:39:58.062947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.315 [2024-07-22 18:39:58.068504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:32:46.315 [2024-07-22 18:39:58.068819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.315 [2024-07-22 18:39:58.068878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.315 [2024-07-22 18:39:58.074587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.315 [2024-07-22 18:39:58.074903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.315 [2024-07-22 18:39:58.074938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.315 [2024-07-22 18:39:58.080660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.315 [2024-07-22 18:39:58.080964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.315 [2024-07-22 18:39:58.081003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.315 [2024-07-22 18:39:58.086827] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.315 [2024-07-22 18:39:58.087139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.315 [2024-07-22 18:39:58.087190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.315 [2024-07-22 18:39:58.093018] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.315 [2024-07-22 18:39:58.093316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.315 [2024-07-22 18:39:58.093371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.315 [2024-07-22 18:39:58.099291] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.315 [2024-07-22 18:39:58.099609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.315 [2024-07-22 18:39:58.099657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.315 [2024-07-22 18:39:58.105494] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.315 [2024-07-22 18:39:58.105784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.315 [2024-07-22 18:39:58.105827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.315 [2024-07-22 18:39:58.111624] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.315 [2024-07-22 18:39:58.111921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.315 [2024-07-22 18:39:58.111976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.315 [2024-07-22 18:39:58.117722] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.315 [2024-07-22 18:39:58.118030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.315 [2024-07-22 18:39:58.118080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.315 [2024-07-22 18:39:58.123890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.315 [2024-07-22 18:39:58.124191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.315 [2024-07-22 18:39:58.124236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.315 [2024-07-22 18:39:58.130162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.315 [2024-07-22 18:39:58.130434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.315 [2024-07-22 18:39:58.130480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.315 [2024-07-22 18:39:58.136232] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.315 [2024-07-22 18:39:58.136519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.315 [2024-07-22 18:39:58.136559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.315 [2024-07-22 18:39:58.142433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.315 [2024-07-22 18:39:58.142736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.315 [2024-07-22 18:39:58.142781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.315 [2024-07-22 18:39:58.148609] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.315 [2024-07-22 18:39:58.148935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.315 [2024-07-22 18:39:58.148974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.315 [2024-07-22 18:39:58.154906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.315 [2024-07-22 18:39:58.155182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.315 [2024-07-22 18:39:58.155221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.315 [2024-07-22 18:39:58.161365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.315 [2024-07-22 18:39:58.161652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.315 [2024-07-22 18:39:58.161690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.315 [2024-07-22 18:39:58.167744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.315 [2024-07-22 18:39:58.168039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.315 [2024-07-22 18:39:58.168077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.315 [2024-07-22 18:39:58.174092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.315 [2024-07-22 18:39:58.174366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.315 [2024-07-22 18:39:58.174414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.315 [2024-07-22 18:39:58.180157] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.316 [2024-07-22 18:39:58.180462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.316 [2024-07-22 18:39:58.180502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.316 [2024-07-22 18:39:58.186469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.316 [2024-07-22 18:39:58.186765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.316 [2024-07-22 18:39:58.186804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.316 [2024-07-22 18:39:58.192571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.316 [2024-07-22 18:39:58.192906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.316 [2024-07-22 18:39:58.192944] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.316 [2024-07-22 18:39:58.198759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.316 [2024-07-22 18:39:58.199064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.316 [2024-07-22 18:39:58.199119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.316 [2024-07-22 18:39:58.204941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.316 [2024-07-22 18:39:58.205236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.316 [2024-07-22 18:39:58.205275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.316 [2024-07-22 18:39:58.211140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.316 [2024-07-22 18:39:58.211431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.316 [2024-07-22 18:39:58.211472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.316 [2024-07-22 18:39:58.217369] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.316 [2024-07-22 18:39:58.217655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.316 [2024-07-22 18:39:58.217693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.316 [2024-07-22 18:39:58.223447] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.316 [2024-07-22 18:39:58.223728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.316 [2024-07-22 18:39:58.223766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.316 [2024-07-22 18:39:58.229589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.316 [2024-07-22 18:39:58.229889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.316 [2024-07-22 18:39:58.229926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.316 [2024-07-22 18:39:58.235684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.316 [2024-07-22 18:39:58.235978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:32:46.316 [2024-07-22 18:39:58.236011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.316 [2024-07-22 18:39:58.241871] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.316 [2024-07-22 18:39:58.242181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.316 [2024-07-22 18:39:58.242214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.316 [2024-07-22 18:39:58.247908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.316 [2024-07-22 18:39:58.248168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.316 [2024-07-22 18:39:58.248205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.316 [2024-07-22 18:39:58.253760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.316 [2024-07-22 18:39:58.254065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.316 [2024-07-22 18:39:58.254103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.316 [2024-07-22 18:39:58.259815] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.316 [2024-07-22 18:39:58.260098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.316 [2024-07-22 18:39:58.260134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.316 [2024-07-22 18:39:58.265869] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.316 [2024-07-22 18:39:58.266187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.316 [2024-07-22 18:39:58.266225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.316 [2024-07-22 18:39:58.271866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.316 [2024-07-22 18:39:58.272122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.316 [2024-07-22 18:39:58.272158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.316 [2024-07-22 18:39:58.278308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.316 [2024-07-22 18:39:58.278605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.316 [2024-07-22 18:39:58.278651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.316 [2024-07-22 18:39:58.284502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.316 [2024-07-22 18:39:58.284779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.316 [2024-07-22 18:39:58.284817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.316 [2024-07-22 18:39:58.290817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.316 [2024-07-22 18:39:58.291135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.316 [2024-07-22 18:39:58.291174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.316 [2024-07-22 18:39:58.297225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.316 [2024-07-22 18:39:58.297513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.317 [2024-07-22 18:39:58.297553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.317 [2024-07-22 18:39:58.303414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.317 [2024-07-22 18:39:58.303710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.317 [2024-07-22 18:39:58.303747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.317 [2024-07-22 18:39:58.309848] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.317 [2024-07-22 18:39:58.310194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.317 [2024-07-22 18:39:58.310232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.317 [2024-07-22 18:39:58.316191] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.317 [2024-07-22 18:39:58.316500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.317 [2024-07-22 18:39:58.316540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.620 [2024-07-22 18:39:58.322173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:32:46.620 [2024-07-22 18:39:58.322442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.620 [2024-07-22 18:39:58.322481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.620 [2024-07-22 18:39:58.328216] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.620 [2024-07-22 18:39:58.328509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.620 [2024-07-22 18:39:58.328546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.620 [2024-07-22 18:39:58.334282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.620 [2024-07-22 18:39:58.334560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.620 [2024-07-22 18:39:58.334613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.620 [2024-07-22 18:39:58.340246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.620 [2024-07-22 18:39:58.340511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.620 [2024-07-22 18:39:58.340549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.620 [2024-07-22 18:39:58.346110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.620 [2024-07-22 18:39:58.346392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.620 [2024-07-22 18:39:58.346431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.620 [2024-07-22 18:39:58.351991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.620 [2024-07-22 18:39:58.352264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.620 [2024-07-22 18:39:58.352300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.620 [2024-07-22 18:39:58.358113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.620 [2024-07-22 18:39:58.358385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.358423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.364118] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.621 [2024-07-22 18:39:58.364387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.364424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.369790] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.621 [2024-07-22 18:39:58.370127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.370160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.375711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.621 [2024-07-22 18:39:58.375994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.376041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.381628] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.621 [2024-07-22 18:39:58.381927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.381964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.387896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.621 [2024-07-22 18:39:58.388217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.388255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.394253] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.621 [2024-07-22 18:39:58.394532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.394572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.400516] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.621 [2024-07-22 18:39:58.400822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.400870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.406613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.621 [2024-07-22 18:39:58.406911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.406942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.412435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.621 [2024-07-22 18:39:58.412746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.412794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.418319] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.621 [2024-07-22 18:39:58.418611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.418646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.424136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.621 [2024-07-22 18:39:58.424401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.424437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.429757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.621 [2024-07-22 18:39:58.430088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.430125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.435578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.621 [2024-07-22 18:39:58.435848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.435908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.441729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.621 [2024-07-22 18:39:58.442050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.442090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.447910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.621 [2024-07-22 18:39:58.448187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.448219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.453975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.621 [2024-07-22 18:39:58.454259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.454300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.460128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.621 [2024-07-22 18:39:58.460413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.460452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.466057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.621 [2024-07-22 18:39:58.466324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.466376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.471826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.621 [2024-07-22 18:39:58.472122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.472159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.477716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.621 [2024-07-22 18:39:58.477993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.478056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.483587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.621 [2024-07-22 18:39:58.483864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.483913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.489965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.621 [2024-07-22 18:39:58.490277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.490315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.496120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.621 [2024-07-22 18:39:58.496411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.496449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.502083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.621 [2024-07-22 18:39:58.502366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.502403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.508009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.621 [2024-07-22 18:39:58.508298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.508334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.513830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.621 [2024-07-22 18:39:58.514169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.621 [2024-07-22 18:39:58.514207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.621 [2024-07-22 18:39:58.519774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.622 [2024-07-22 18:39:58.520077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.622 [2024-07-22 18:39:58.520114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.622 [2024-07-22 18:39:58.525717] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.622 [2024-07-22 18:39:58.526044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.622 [2024-07-22 18:39:58.526081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.622 [2024-07-22 18:39:58.531871] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.622 [2024-07-22 18:39:58.532139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.622 [2024-07-22 18:39:58.532187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.622 [2024-07-22 18:39:58.538080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.622 [2024-07-22 18:39:58.538389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.622 [2024-07-22 18:39:58.538427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.622 [2024-07-22 18:39:58.544416] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.622 [2024-07-22 18:39:58.544688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.622 [2024-07-22 18:39:58.544728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.622 [2024-07-22 18:39:58.550667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.622 [2024-07-22 18:39:58.550965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.622 [2024-07-22 18:39:58.551003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.622 [2024-07-22 18:39:58.556876] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.622 [2024-07-22 18:39:58.557164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.622 [2024-07-22 18:39:58.557201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.622 [2024-07-22 18:39:58.562933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.622 [2024-07-22 18:39:58.563210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.622 [2024-07-22 18:39:58.563248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.622 [2024-07-22 18:39:58.568846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:32:46.622 [2024-07-22 18:39:58.569144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.622 [2024-07-22 18:39:58.569181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.622 [2024-07-22 18:39:58.574920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.622 [2024-07-22 18:39:58.575197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.622 [2024-07-22 18:39:58.575234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.622 [2024-07-22 18:39:58.581030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.622 [2024-07-22 18:39:58.581302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.622 [2024-07-22 18:39:58.581340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.622 [2024-07-22 18:39:58.587264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.622 [2024-07-22 18:39:58.587552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.622 [2024-07-22 18:39:58.587590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.622 [2024-07-22 18:39:58.593544] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.622 [2024-07-22 18:39:58.593826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.622 [2024-07-22 18:39:58.593875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.622 [2024-07-22 18:39:58.599815] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.622 [2024-07-22 18:39:58.600108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.622 [2024-07-22 18:39:58.600145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.622 [2024-07-22 18:39:58.605686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.622 [2024-07-22 18:39:58.605975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.622 [2024-07-22 18:39:58.606036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.622 [2024-07-22 18:39:58.611686] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.622 [2024-07-22 18:39:58.611979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.622 [2024-07-22 18:39:58.612011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.622 [2024-07-22 18:39:58.617834] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.622 [2024-07-22 18:39:58.618157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.622 [2024-07-22 18:39:58.618193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.622 [2024-07-22 18:39:58.623922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.622 [2024-07-22 18:39:58.624208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.622 [2024-07-22 18:39:58.624239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.622 [2024-07-22 18:39:58.629945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.622 [2024-07-22 18:39:58.630262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.622 [2024-07-22 18:39:58.630301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.882 [2024-07-22 18:39:58.636038] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.882 [2024-07-22 18:39:58.636353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.882 [2024-07-22 18:39:58.636401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.882 [2024-07-22 18:39:58.642189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.882 [2024-07-22 18:39:58.642460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.882 [2024-07-22 18:39:58.642499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.882 [2024-07-22 18:39:58.648477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.882 [2024-07-22 18:39:58.648751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.882 [2024-07-22 18:39:58.648796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.882 [2024-07-22 18:39:58.654680] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.882 [2024-07-22 18:39:58.654988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.882 [2024-07-22 18:39:58.655028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.882 [2024-07-22 18:39:58.660827] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.882 [2024-07-22 18:39:58.661133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.882 [2024-07-22 18:39:58.661172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.882 [2024-07-22 18:39:58.666964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.882 [2024-07-22 18:39:58.667252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.882 [2024-07-22 18:39:58.667290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.882 [2024-07-22 18:39:58.673168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.882 [2024-07-22 18:39:58.673436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.882 [2024-07-22 18:39:58.673473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.882 [2024-07-22 18:39:58.679339] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.882 [2024-07-22 18:39:58.679613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.882 [2024-07-22 18:39:58.679652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.882 [2024-07-22 18:39:58.685645] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.882 [2024-07-22 18:39:58.685954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.882 [2024-07-22 18:39:58.685993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.882 [2024-07-22 18:39:58.692063] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.882 [2024-07-22 18:39:58.692337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.882 [2024-07-22 18:39:58.692376] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.882 [2024-07-22 18:39:58.698391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.882 [2024-07-22 18:39:58.698665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.882 [2024-07-22 18:39:58.698704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.882 [2024-07-22 18:39:58.704402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.882 [2024-07-22 18:39:58.704688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.882 [2024-07-22 18:39:58.704727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.882 [2024-07-22 18:39:58.710765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.882 [2024-07-22 18:39:58.711079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.882 [2024-07-22 18:39:58.711118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.882 [2024-07-22 18:39:58.717021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.882 [2024-07-22 18:39:58.717295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.882 [2024-07-22 18:39:58.717327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.882 [2024-07-22 18:39:58.723266] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.882 [2024-07-22 18:39:58.723536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.882 [2024-07-22 18:39:58.723576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.882 [2024-07-22 18:39:58.729568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.882 [2024-07-22 18:39:58.729844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.882 [2024-07-22 18:39:58.729892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.882 [2024-07-22 18:39:58.735824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.882 [2024-07-22 18:39:58.736120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:46.882 [2024-07-22 18:39:58.736156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.882 [2024-07-22 18:39:58.742187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.882 [2024-07-22 18:39:58.742467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.882 [2024-07-22 18:39:58.742506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.882 [2024-07-22 18:39:58.748128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.882 [2024-07-22 18:39:58.748388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.882 [2024-07-22 18:39:58.748426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.882 [2024-07-22 18:39:58.754145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.882 [2024-07-22 18:39:58.754412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.883 [2024-07-22 18:39:58.754451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.883 [2024-07-22 18:39:58.760134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.883 [2024-07-22 18:39:58.760420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.883 [2024-07-22 18:39:58.760458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.883 [2024-07-22 18:39:58.766084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.883 [2024-07-22 18:39:58.766357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.883 [2024-07-22 18:39:58.766394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.883 [2024-07-22 18:39:58.772022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.883 [2024-07-22 18:39:58.772310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.883 [2024-07-22 18:39:58.772347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.883 [2024-07-22 18:39:58.778074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.883 [2024-07-22 18:39:58.778354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.883 [2024-07-22 18:39:58.778392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.883 [2024-07-22 18:39:58.784156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.883 [2024-07-22 18:39:58.784445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.883 [2024-07-22 18:39:58.784484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.883 [2024-07-22 18:39:58.790215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.883 [2024-07-22 18:39:58.790497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.883 [2024-07-22 18:39:58.790535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.883 [2024-07-22 18:39:58.796267] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.883 [2024-07-22 18:39:58.796542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.883 [2024-07-22 18:39:58.796579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.883 [2024-07-22 18:39:58.802243] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.883 [2024-07-22 18:39:58.802513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.883 [2024-07-22 18:39:58.802551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.883 [2024-07-22 18:39:58.808134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.883 [2024-07-22 18:39:58.808430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.883 [2024-07-22 18:39:58.808468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.883 [2024-07-22 18:39:58.813998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.883 [2024-07-22 18:39:58.814293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.883 [2024-07-22 18:39:58.814331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.883 [2024-07-22 18:39:58.819977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:32:46.883 [2024-07-22 18:39:58.820257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.883 [2024-07-22 18:39:58.820295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.883 [2024-07-22 18:39:58.825899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.883 [2024-07-22 18:39:58.826183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.883 [2024-07-22 18:39:58.826220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.883 [2024-07-22 18:39:58.831721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.883 [2024-07-22 18:39:58.832016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.883 [2024-07-22 18:39:58.832053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.883 [2024-07-22 18:39:58.837715] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.883 [2024-07-22 18:39:58.838020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.883 [2024-07-22 18:39:58.838073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.883 [2024-07-22 18:39:58.843644] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.883 [2024-07-22 18:39:58.843941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.883 [2024-07-22 18:39:58.843978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.883 [2024-07-22 18:39:58.849628] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.883 [2024-07-22 18:39:58.849935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.883 [2024-07-22 18:39:58.849973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.883 [2024-07-22 18:39:58.855712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.883 [2024-07-22 18:39:58.856013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.883 [2024-07-22 18:39:58.856050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.883 [2024-07-22 18:39:58.861734] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.883 [2024-07-22 18:39:58.862057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.883 [2024-07-22 18:39:58.862094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.883 [2024-07-22 18:39:58.867823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.883 [2024-07-22 18:39:58.868106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.883 [2024-07-22 18:39:58.868142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.883 [2024-07-22 18:39:58.873721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.883 [2024-07-22 18:39:58.874017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.883 [2024-07-22 18:39:58.874071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.883 [2024-07-22 18:39:58.879868] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.883 [2024-07-22 18:39:58.880142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.883 [2024-07-22 18:39:58.880178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.883 [2024-07-22 18:39:58.885698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.883 [2024-07-22 18:39:58.885989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.883 [2024-07-22 18:39:58.886053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.883 [2024-07-22 18:39:58.891531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.883 [2024-07-22 18:39:58.891805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.883 [2024-07-22 18:39:58.891854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.883 [2024-07-22 18:39:58.897407] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:46.883 [2024-07-22 18:39:58.897680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.143 [2024-07-22 18:39:58.897717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.143 [2024-07-22 18:39:58.903381] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.143 [2024-07-22 18:39:58.903662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.143 [2024-07-22 18:39:58.903709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.143 [2024-07-22 18:39:58.909210] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.143 [2024-07-22 18:39:58.909501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.143 [2024-07-22 18:39:58.909537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.143 [2024-07-22 18:39:58.915092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.143 [2024-07-22 18:39:58.915350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.143 [2024-07-22 18:39:58.915386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.143 [2024-07-22 18:39:58.920746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.143 [2024-07-22 18:39:58.921032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.143 [2024-07-22 18:39:58.921069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.143 [2024-07-22 18:39:58.926575] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.143 [2024-07-22 18:39:58.926860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.143 [2024-07-22 18:39:58.926904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.143 [2024-07-22 18:39:58.932339] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.143 [2024-07-22 18:39:58.932610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.143 [2024-07-22 18:39:58.932686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.143 [2024-07-22 18:39:58.938194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.143 [2024-07-22 18:39:58.938475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.143 [2024-07-22 18:39:58.938513] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.143 [2024-07-22 18:39:58.943966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.143 [2024-07-22 18:39:58.944239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.143 [2024-07-22 18:39:58.944276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.143 [2024-07-22 18:39:58.949750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.143 [2024-07-22 18:39:58.950058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.143 [2024-07-22 18:39:58.950095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.143 [2024-07-22 18:39:58.955796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.143 [2024-07-22 18:39:58.956115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.143 [2024-07-22 18:39:58.956152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.143 [2024-07-22 18:39:58.961750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.143 [2024-07-22 18:39:58.962073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.143 [2024-07-22 18:39:58.962112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.143 [2024-07-22 18:39:58.967521] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.143 [2024-07-22 18:39:58.967799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.143 [2024-07-22 18:39:58.967850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.143 [2024-07-22 18:39:58.973394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.143 [2024-07-22 18:39:58.973671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.143 [2024-07-22 18:39:58.973709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.143 [2024-07-22 18:39:58.979278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.143 [2024-07-22 18:39:58.979551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:47.143 [2024-07-22 18:39:58.979589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.143 [2024-07-22 18:39:58.985091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.143 [2024-07-22 18:39:58.985362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.143 [2024-07-22 18:39:58.985399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.143 [2024-07-22 18:39:58.990820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.143 [2024-07-22 18:39:58.991105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.143 [2024-07-22 18:39:58.991143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.143 [2024-07-22 18:39:58.996502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:58.996773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.144 [2024-07-22 18:39:58.996810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.144 [2024-07-22 18:39:59.002313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:59.002591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.144 [2024-07-22 18:39:59.002638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.144 [2024-07-22 18:39:59.008060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:59.008334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.144 [2024-07-22 18:39:59.008398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.144 [2024-07-22 18:39:59.013938] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:59.014252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.144 [2024-07-22 18:39:59.014290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.144 [2024-07-22 18:39:59.019780] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:59.020075] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.144 [2024-07-22 18:39:59.020113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.144 [2024-07-22 18:39:59.025833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:59.026164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.144 [2024-07-22 18:39:59.026201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.144 [2024-07-22 18:39:59.031939] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:59.032218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.144 [2024-07-22 18:39:59.032270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.144 [2024-07-22 18:39:59.038174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:59.038448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.144 [2024-07-22 18:39:59.038486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.144 [2024-07-22 18:39:59.044384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:59.044685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.144 [2024-07-22 18:39:59.044723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.144 [2024-07-22 18:39:59.050629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:59.050914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.144 [2024-07-22 18:39:59.050967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.144 [2024-07-22 18:39:59.056726] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:59.057030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.144 [2024-07-22 18:39:59.057067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.144 [2024-07-22 18:39:59.063020] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:59.063310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.144 [2024-07-22 18:39:59.063362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.144 [2024-07-22 18:39:59.069056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:59.069326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.144 [2024-07-22 18:39:59.069362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.144 [2024-07-22 18:39:59.074753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:59.075033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.144 [2024-07-22 18:39:59.075069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.144 [2024-07-22 18:39:59.080608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:59.080890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.144 [2024-07-22 18:39:59.080920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.144 [2024-07-22 18:39:59.086355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:59.086642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.144 [2024-07-22 18:39:59.086679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.144 [2024-07-22 18:39:59.092086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:59.092348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.144 [2024-07-22 18:39:59.092385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.144 [2024-07-22 18:39:59.097736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:59.098025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.144 [2024-07-22 18:39:59.098061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.144 [2024-07-22 18:39:59.103404] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:59.103671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.144 [2024-07-22 18:39:59.103707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.144 [2024-07-22 18:39:59.109086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:59.109346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.144 [2024-07-22 18:39:59.109383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.144 [2024-07-22 18:39:59.114744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:59.115031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.144 [2024-07-22 18:39:59.115068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.144 [2024-07-22 18:39:59.120593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:59.120887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.144 [2024-07-22 18:39:59.120933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.144 [2024-07-22 18:39:59.126698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:59.127009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.144 [2024-07-22 18:39:59.127042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.144 [2024-07-22 18:39:59.132992] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:59.133271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.144 [2024-07-22 18:39:59.133309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.144 [2024-07-22 18:39:59.139059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:59.139330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.144 [2024-07-22 18:39:59.139370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.144 [2024-07-22 18:39:59.145109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:59.145392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.144 [2024-07-22 18:39:59.145432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.144 [2024-07-22 18:39:59.151195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.144 [2024-07-22 18:39:59.151468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.145 [2024-07-22 18:39:59.151507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.145 [2024-07-22 18:39:59.157223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.145 [2024-07-22 18:39:59.157507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.145 [2024-07-22 18:39:59.157545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.404 [2024-07-22 18:39:59.163341] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.404 [2024-07-22 18:39:59.163612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.404 [2024-07-22 18:39:59.163658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.404 [2024-07-22 18:39:59.169599] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.404 [2024-07-22 18:39:59.169905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.404 [2024-07-22 18:39:59.169944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.404 [2024-07-22 18:39:59.175626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.404 [2024-07-22 18:39:59.175937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.404 [2024-07-22 18:39:59.175976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.404 [2024-07-22 18:39:59.181677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.404 [2024-07-22 18:39:59.181980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.404 [2024-07-22 18:39:59.182036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.404 [2024-07-22 18:39:59.187730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.404 [2024-07-22 18:39:59.188043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.404 [2024-07-22 18:39:59.188081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.404 [2024-07-22 18:39:59.193707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.404 [2024-07-22 18:39:59.194024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.404 [2024-07-22 18:39:59.194062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.404 [2024-07-22 18:39:59.199850] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.404 [2024-07-22 18:39:59.200135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.404 [2024-07-22 18:39:59.200187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.404 [2024-07-22 18:39:59.205749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.404 [2024-07-22 18:39:59.206082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.404 [2024-07-22 18:39:59.206121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.404 [2024-07-22 18:39:59.211667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.404 [2024-07-22 18:39:59.211961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.404 [2024-07-22 18:39:59.211998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.404 [2024-07-22 18:39:59.217610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.404 [2024-07-22 18:39:59.217926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.404 [2024-07-22 18:39:59.217958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.404 [2024-07-22 18:39:59.223515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.404 [2024-07-22 18:39:59.223814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:47.404 [2024-07-22 18:39:59.223846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.404 [2024-07-22 18:39:59.229435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.404 [2024-07-22 18:39:59.229716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.404 [2024-07-22 18:39:59.229754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.404 [2024-07-22 18:39:59.235363] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.404 [2024-07-22 18:39:59.235639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.404 [2024-07-22 18:39:59.235677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.404 [2024-07-22 18:39:59.241446] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.404 [2024-07-22 18:39:59.241718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.404 [2024-07-22 18:39:59.241756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.404 [2024-07-22 18:39:59.247472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.404 [2024-07-22 18:39:59.247755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.404 [2024-07-22 18:39:59.247793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.404 [2024-07-22 18:39:59.253366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.404 [2024-07-22 18:39:59.253643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.404 [2024-07-22 18:39:59.253681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.404 [2024-07-22 18:39:59.259316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.404 [2024-07-22 18:39:59.259594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.404 [2024-07-22 18:39:59.259631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.404 [2024-07-22 18:39:59.265208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.404 [2024-07-22 18:39:59.265485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.405 [2024-07-22 18:39:59.265522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.405 [2024-07-22 18:39:59.271199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.405 [2024-07-22 18:39:59.271475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.405 [2024-07-22 18:39:59.271513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.405 [2024-07-22 18:39:59.277124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.405 [2024-07-22 18:39:59.277405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.405 [2024-07-22 18:39:59.277442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.405 [2024-07-22 18:39:59.283160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.405 [2024-07-22 18:39:59.283437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.405 [2024-07-22 18:39:59.283475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.405 [2024-07-22 18:39:59.289079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.405 [2024-07-22 18:39:59.289363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.405 [2024-07-22 18:39:59.289401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.405 [2024-07-22 18:39:59.295121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.405 [2024-07-22 18:39:59.295392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.405 [2024-07-22 18:39:59.295431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.405 [2024-07-22 18:39:59.301165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.405 [2024-07-22 18:39:59.301449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.405 [2024-07-22 18:39:59.301489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.405 [2024-07-22 18:39:59.307226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:32:47.405 [2024-07-22 18:39:59.307524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.405 [2024-07-22 18:39:59.307574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.405 [2024-07-22 18:39:59.313455] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.405 [2024-07-22 18:39:59.313744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.405 [2024-07-22 18:39:59.313783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.405 [2024-07-22 18:39:59.319679] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.405 [2024-07-22 18:39:59.319977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.405 [2024-07-22 18:39:59.320014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.405 [2024-07-22 18:39:59.325977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.405 [2024-07-22 18:39:59.326295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.405 [2024-07-22 18:39:59.326333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.405 [2024-07-22 18:39:59.332179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.405 [2024-07-22 18:39:59.332479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.405 [2024-07-22 18:39:59.332517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.405 [2024-07-22 18:39:59.338384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.405 [2024-07-22 18:39:59.338685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.405 [2024-07-22 18:39:59.338723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.405 [2024-07-22 18:39:59.344375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.405 [2024-07-22 18:39:59.344662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.405 [2024-07-22 18:39:59.344701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.405 [2024-07-22 18:39:59.350451] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.405 [2024-07-22 18:39:59.350748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.405 [2024-07-22 18:39:59.350787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.405 [2024-07-22 18:39:59.356489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.405 [2024-07-22 18:39:59.356766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.405 [2024-07-22 18:39:59.356804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.405 [2024-07-22 18:39:59.362513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.405 [2024-07-22 18:39:59.362808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.405 [2024-07-22 18:39:59.362856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.405 [2024-07-22 18:39:59.368346] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.405 [2024-07-22 18:39:59.368623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.405 [2024-07-22 18:39:59.368661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.405 [2024-07-22 18:39:59.374360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.405 [2024-07-22 18:39:59.374653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.405 [2024-07-22 18:39:59.374691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.405 [2024-07-22 18:39:59.380238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.405 [2024-07-22 18:39:59.380522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.405 [2024-07-22 18:39:59.380554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.405 [2024-07-22 18:39:59.386217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.405 [2024-07-22 18:39:59.386515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.405 [2024-07-22 18:39:59.386553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.405 [2024-07-22 18:39:59.392282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.405 [2024-07-22 18:39:59.392565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.405 [2024-07-22 18:39:59.392604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.405 [2024-07-22 18:39:59.398470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.405 [2024-07-22 18:39:59.398754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.405 [2024-07-22 18:39:59.398793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.405 [2024-07-22 18:39:59.404765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.405 [2024-07-22 18:39:59.405086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.405 [2024-07-22 18:39:59.405125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.405 [2024-07-22 18:39:59.411054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.405 [2024-07-22 18:39:59.411336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.405 [2024-07-22 18:39:59.411374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.405 [2024-07-22 18:39:59.417098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.405 [2024-07-22 18:39:59.417375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.405 [2024-07-22 18:39:59.417413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.665 [2024-07-22 18:39:59.423163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.665 [2024-07-22 18:39:59.423429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.665 [2024-07-22 18:39:59.423466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.665 [2024-07-22 18:39:59.429245] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.665 [2024-07-22 18:39:59.429528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.665 [2024-07-22 18:39:59.429567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.665 [2024-07-22 18:39:59.435319] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.665 [2024-07-22 18:39:59.435622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.665 [2024-07-22 18:39:59.435661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.665 [2024-07-22 18:39:59.441423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.665 [2024-07-22 18:39:59.441710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.665 [2024-07-22 18:39:59.441750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.665 [2024-07-22 18:39:59.447472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.665 [2024-07-22 18:39:59.447754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.665 [2024-07-22 18:39:59.447793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.665 [2024-07-22 18:39:59.453458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.665 [2024-07-22 18:39:59.453753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.665 [2024-07-22 18:39:59.453791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.665 [2024-07-22 18:39:59.459493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.665 [2024-07-22 18:39:59.459769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.665 [2024-07-22 18:39:59.459807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.665 [2024-07-22 18:39:59.465524] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.665 [2024-07-22 18:39:59.465797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.665 [2024-07-22 18:39:59.465860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.665 [2024-07-22 18:39:59.471560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.665 [2024-07-22 18:39:59.471837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:47.665 [2024-07-22 18:39:59.471888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.665 [2024-07-22 18:39:59.477668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.665 [2024-07-22 18:39:59.477958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.665 [2024-07-22 18:39:59.477996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.665 [2024-07-22 18:39:59.483804] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.665 [2024-07-22 18:39:59.484109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.666 [2024-07-22 18:39:59.484148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.666 [2024-07-22 18:39:59.490051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.666 [2024-07-22 18:39:59.490326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.666 [2024-07-22 18:39:59.490364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.666 [2024-07-22 18:39:59.496117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.666 [2024-07-22 18:39:59.496388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.666 [2024-07-22 18:39:59.496427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.666 [2024-07-22 18:39:59.502140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.666 [2024-07-22 18:39:59.502415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.666 [2024-07-22 18:39:59.502455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.666 [2024-07-22 18:39:59.508186] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.666 [2024-07-22 18:39:59.508463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.666 [2024-07-22 18:39:59.508502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.666 [2024-07-22 18:39:59.514293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.666 [2024-07-22 18:39:59.514569] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.666 [2024-07-22 18:39:59.514608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.666 [2024-07-22 18:39:59.520465] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.666 [2024-07-22 18:39:59.520749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.666 [2024-07-22 18:39:59.520787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.666 [2024-07-22 18:39:59.526766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.666 [2024-07-22 18:39:59.527066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.666 [2024-07-22 18:39:59.527104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.666 [2024-07-22 18:39:59.532898] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.666 [2024-07-22 18:39:59.533178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.666 [2024-07-22 18:39:59.533216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.666 [2024-07-22 18:39:59.539057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.666 [2024-07-22 18:39:59.539344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.666 [2024-07-22 18:39:59.539383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.666 [2024-07-22 18:39:59.545224] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.666 [2024-07-22 18:39:59.545502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.666 [2024-07-22 18:39:59.545538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.666 [2024-07-22 18:39:59.551335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.666 [2024-07-22 18:39:59.551605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.666 [2024-07-22 18:39:59.551641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.666 [2024-07-22 18:39:59.557274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.666 
[2024-07-22 18:39:59.557545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.666 [2024-07-22 18:39:59.557581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.666 [2024-07-22 18:39:59.563227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.666 [2024-07-22 18:39:59.563513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.666 [2024-07-22 18:39:59.563549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.666 [2024-07-22 18:39:59.569059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.666 [2024-07-22 18:39:59.569335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.666 [2024-07-22 18:39:59.569371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.666 [2024-07-22 18:39:59.575138] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.666 [2024-07-22 18:39:59.575403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.666 [2024-07-22 18:39:59.575440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.666 [2024-07-22 18:39:59.581245] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.666 [2024-07-22 18:39:59.581526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.666 [2024-07-22 18:39:59.581564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.666 [2024-07-22 18:39:59.587196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.666 [2024-07-22 18:39:59.587501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.666 [2024-07-22 18:39:59.587538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.666 [2024-07-22 18:39:59.593030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.666 [2024-07-22 18:39:59.593306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.666 [2024-07-22 18:39:59.593343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.666 [2024-07-22 18:39:59.598597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:47.666 [2024-07-22 18:39:59.598704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.666 [2024-07-22 18:39:59.598734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.666 00:32:47.666 Latency(us) 00:32:47.666 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.666 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:47.666 nvme0n1 : 2.00 5094.06 636.76 0.00 0.00 3132.24 2383.13 10545.34 00:32:47.666 =================================================================================================================== 00:32:47.666 Total : 5094.06 636.76 0.00 0.00 3132.24 2383.13 10545.34 00:32:47.666 0 00:32:47.666 18:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:47.666 18:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:47.666 18:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:47.666 | .driver_specific 00:32:47.666 | .nvme_error 00:32:47.666 | .status_code 00:32:47.666 | .command_transient_transport_error' 00:32:47.666 18:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:47.924 18:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 329 > 0 )) 00:32:47.924 18:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 105207 00:32:47.924 18:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 105207 ']' 00:32:47.924 18:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 105207 00:32:47.924 18:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:32:48.184 18:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:48.184 18:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 105207 00:32:48.184 18:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:48.184 18:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:48.184 killing process with pid 105207 00:32:48.184 18:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 105207' 00:32:48.184 Received shutdown signal, test time was about 2.000000 seconds 00:32:48.184 00:32:48.184 Latency(us) 00:32:48.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:48.184 =================================================================================================================== 00:32:48.184 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:48.184 18:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 105207 00:32:48.184 18:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@972 -- # wait 105207 00:32:49.555 18:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 104867 00:32:49.555 18:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 104867 ']' 00:32:49.555 18:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 104867 00:32:49.555 18:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:32:49.555 18:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:49.555 18:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104867 00:32:49.555 18:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:49.555 18:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:49.555 killing process with pid 104867 00:32:49.555 18:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104867' 00:32:49.555 18:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 104867 00:32:49.555 18:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 104867 00:32:50.930 ************************************ 00:32:50.930 END TEST nvmf_digest_error 00:32:50.930 ************************************ 00:32:50.930 00:32:50.930 real 0m23.700s 00:32:50.930 user 0m44.102s 00:32:50.930 sys 0m5.215s 00:32:50.930 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:50.930 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:50.930 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:32:50.930 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:32:50.930 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:32:50.930 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:50.930 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:32:50.930 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:50.930 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:32:50.930 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:50.930 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:50.930 rmmod nvme_tcp 00:32:50.930 rmmod nvme_fabrics 00:32:50.930 rmmod nvme_keyring 00:32:50.930 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:50.930 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:32:50.930 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:32:50.930 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 104867 ']' 00:32:50.931 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 104867 00:32:50.931 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 104867 ']' 00:32:50.931 18:40:02 
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 104867 00:32:50.931 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (104867) - No such process 00:32:50.931 Process with pid 104867 is not found 00:32:50.931 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 104867 is not found' 00:32:50.931 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:50.931 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:50.931 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:50.931 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:50.931 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:50.931 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.931 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:50.931 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.931 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:32:50.931 00:32:50.931 real 0m49.941s 00:32:50.931 user 1m31.993s 00:32:50.931 sys 0m10.705s 00:32:50.931 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:50.931 18:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:50.931 ************************************ 00:32:50.931 END TEST nvmf_digest 00:32:50.931 ************************************ 00:32:50.931 18:40:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:32:50.931 18:40:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]] 00:32:50.931 18:40:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]] 00:32:50.931 18:40:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:32:50.931 18:40:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:50.931 18:40:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:50.931 18:40:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.931 ************************************ 00:32:50.931 START TEST nvmf_mdns_discovery 00:32:50.931 ************************************ 00:32:50.931 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:32:51.189 * Looking for test storage... 
00:32:51.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:32:51.189 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:51.189 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:51.189 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:51.189 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:51.189 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:51.189 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:51.189 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:51.189 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:51.189 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:51.189 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:51.189 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:51.189 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:51.189 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 
00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:51.190 18:40:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:51.190 18:40:02 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:32:51.190 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:32:51.190 Cannot find device "nvmf_tgt_br" 00:32:51.190 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:32:51.190 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:32:51.190 Cannot find device "nvmf_tgt_br2" 00:32:51.190 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:32:51.190 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:32:51.190 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:32:51.190 Cannot find device "nvmf_tgt_br" 00:32:51.190 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:32:51.190 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:32:51.190 Cannot find device "nvmf_tgt_br2" 00:32:51.190 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:32:51.190 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:32:51.190 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:32:51.190 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:51.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:51.190 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:32:51.190 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:51.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:51.190 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:32:51.190 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:32:51.190 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:51.190 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:51.190 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:51.190 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:51.190 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:51.190 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:51.190 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:32:51.190 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:32:51.464 
18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:32:51.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:51.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:32:51.464 00:32:51.464 --- 10.0.0.2 ping statistics --- 00:32:51.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:51.464 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:32:51.464 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:51.464 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:32:51.464 00:32:51.464 --- 10.0.0.3 ping statistics --- 00:32:51.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:51.464 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:51.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:51.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:32:51.464 00:32:51.464 --- 10.0.0.1 ping statistics --- 00:32:51.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:51.464 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=105534 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 105534 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 105534 ']' 00:32:51.464 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:51.465 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:32:51.465 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:51.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:51.465 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:51.465 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:51.465 18:40:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:51.725 [2024-07-22 18:40:03.506464] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:32:51.725 [2024-07-22 18:40:03.506657] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:51.725 [2024-07-22 18:40:03.690699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:52.289 [2024-07-22 18:40:04.001902] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:52.289 [2024-07-22 18:40:04.002000] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:52.289 [2024-07-22 18:40:04.002047] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:52.289 [2024-07-22 18:40:04.002067] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:52.289 [2024-07-22 18:40:04.002080] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:52.289 [2024-07-22 18:40:04.002141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:52.547 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:52.548 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:32:52.548 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:52.548 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:52.548 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.548 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:52.548 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:32:52.548 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.548 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.548 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.548 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:32:52.548 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.548 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.805 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.805 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:52.805 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.805 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.805 [2024-07-22 18:40:04.804943] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:52.805 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.805 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:52.805 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.805 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.805 [2024-07-22 18:40:04.817080] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:52.805 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.805 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:52.805 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.805 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:53.116 null0 00:32:53.116 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.116 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:53.116 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.116 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:53.116 null1 00:32:53.116 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.116 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:32:53.116 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.116 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:53.116 null2 00:32:53.116 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.116 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:32:53.116 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.116 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:53.116 null3 00:32:53.116 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.116 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:32:53.116 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.116 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:53.116 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.116 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=105584 00:32:53.116 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:53.116 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 105584 /tmp/host.sock 00:32:53.116 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 105584 ']' 00:32:53.116 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # 
local rpc_addr=/tmp/host.sock 00:32:53.116 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:53.116 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:53.116 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:53.116 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:53.116 18:40:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:53.116 [2024-07-22 18:40:05.019655] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:53.116 [2024-07-22 18:40:05.019991] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105584 ] 00:32:53.375 [2024-07-22 18:40:05.197540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:53.632 [2024-07-22 18:40:05.509354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:53.889 18:40:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:53.889 18:40:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:32:53.889 18:40:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:32:53.889 18:40:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:32:53.889 18:40:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:32:54.146 18:40:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=105613 00:32:54.146 18:40:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:32:54.146 18:40:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:32:54.146 18:40:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:32:54.146 Process 984 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:32:54.146 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:32:54.146 Successfully dropped root privileges. 00:32:54.146 avahi-daemon 0.8 starting up. 00:32:54.146 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:32:54.146 Successfully called chroot(). 00:32:54.146 Successfully dropped remaining capabilities. 00:32:54.146 No service file found in /etc/avahi/services. 00:32:55.075 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:32:55.075 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:32:55.075 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:32:55.075 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:32:55.075 Network interface enumeration completed. 00:32:55.075 Registering new address record for fe80::7d:9bff:fe30:67a3 on nvmf_tgt_if2.*. 
00:32:55.075 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:32:55.075 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:32:55.075 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:32:55.075 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 155005584. 00:32:55.075 18:40:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:55.075 18:40:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.075 18:40:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.075 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.075 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:32:55.075 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.075 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.075 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.075 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:32:55.075 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:32:55.075 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:55.075 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.075 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.075 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:32:55.075 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:32:55.075 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:32:55.075 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.075 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:32:55.075 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:32:55.075 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:55.075 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:32:55.075 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.075 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:32:55.075 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.075 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:32:55.075 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:32:55.332 18:40:07 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:32:55.332 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:55.333 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.333 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.333 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r 
'.[].name' 00:32:55.333 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:32:55.333 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:32:55.333 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.333 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:32:55.333 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:32:55.333 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:55.333 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:32:55.333 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.333 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:32:55.333 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:32:55.333 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.333 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.333 [2024-07-22 18:40:07.317690] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:32:55.590 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:32:55.590 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:55.590 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.590 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.590 [2024-07-22 18:40:07.374345] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:55.590 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.590 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:55.590 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.590 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.590 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.590 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:32:55.590 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.590 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.590 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.590 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:32:55.590 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.590 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:32:55.590 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.590 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:32:55.590 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.590 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.590 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.590 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:32:55.590 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.590 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.591 [2024-07-22 18:40:07.414166] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:32:55.591 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.591 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:32:55.591 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.591 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.591 [2024-07-22 18:40:07.422220] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:55.591 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.591 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:32:55.591 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.591 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.591 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.591 18:40:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:32:56.524 [2024-07-22 18:40:08.217714] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:32:57.090 [2024-07-22 18:40:08.817781] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:32:57.090 [2024-07-22 18:40:08.817899] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:32:57.090 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:32:57.090 cookie is 0 00:32:57.090 is_local: 1 00:32:57.090 our_own: 0 00:32:57.090 wide_area: 0 00:32:57.090 multicast: 1 00:32:57.090 cached: 1 00:32:57.090 [2024-07-22 18:40:08.917751] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:32:57.090 [2024-07-22 18:40:08.917856] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:32:57.090 
TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:32:57.090 cookie is 0 00:32:57.090 is_local: 1 00:32:57.090 our_own: 0 00:32:57.090 wide_area: 0 00:32:57.090 multicast: 1 00:32:57.090 cached: 1 00:32:57.090 [2024-07-22 18:40:08.917890] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:32:57.090 [2024-07-22 18:40:09.017772] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:32:57.090 [2024-07-22 18:40:09.017848] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:32:57.090 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:32:57.090 cookie is 0 00:32:57.090 is_local: 1 00:32:57.090 our_own: 0 00:32:57.090 wide_area: 0 00:32:57.090 multicast: 1 00:32:57.090 cached: 1 00:32:57.348 [2024-07-22 18:40:09.117754] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:32:57.348 [2024-07-22 18:40:09.117823] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:32:57.348 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:32:57.348 cookie is 0 00:32:57.348 is_local: 1 00:32:57.348 our_own: 0 00:32:57.348 wide_area: 0 00:32:57.348 multicast: 1 00:32:57.348 cached: 1 00:32:57.348 [2024-07-22 18:40:09.117883] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:32:57.915 [2024-07-22 18:40:09.833383] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:32:57.915 [2024-07-22 18:40:09.833471] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:32:57.915 [2024-07-22 18:40:09.833529] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:32:57.915 [2024-07-22 18:40:09.921691] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:32:58.173 [2024-07-22 18:40:09.985328] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:32:58.173 [2024-07-22 18:40:09.985400] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:32:58.173 [2024-07-22 18:40:10.033142] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:58.173 [2024-07-22 18:40:10.033199] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:58.173 [2024-07-22 18:40:10.033255] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:58.173 [2024-07-22 18:40:10.121348] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:32:58.173 [2024-07-22 18:40:10.185795] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:32:58.173 [2024-07-22 18:40:10.185903] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:00.705 18:40:12 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:33:00.705 
18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:33:00.705 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.964 18:40:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:33:01.898 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:33:01.898 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:01.898 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:33:01.898 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:33:01.898 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.898 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:33:01.898 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.156 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.157 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:33:02.157 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:33:02.157 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:02.157 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.157 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.157 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:33:02.157 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.157 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:33:02.157 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:33:02.157 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:33:02.157 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:02.157 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.157 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.157 [2024-07-22 18:40:13.978600] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:02.157 [2024-07-22 18:40:13.979494] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:33:02.157 [2024-07-22 18:40:13.979565] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:33:02.157 [2024-07-22 18:40:13.979633] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:02.157 [2024-07-22 18:40:13.979660] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:02.157 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.157 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:33:02.157 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.157 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.157 [2024-07-22 18:40:13.986402] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:33:02.157 [2024-07-22 18:40:13.987450] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:33:02.157 [2024-07-22 18:40:13.987536] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:02.157 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.157 18:40:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:33:02.157 [2024-07-22 18:40:14.118630] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:33:02.157 [2024-07-22 18:40:14.119083] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:33:02.415 [2024-07-22 18:40:14.177572] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:33:02.415 [2024-07-22 18:40:14.177620] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:02.415 [2024-07-22 18:40:14.177643] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:02.415 [2024-07-22 18:40:14.177679] 
bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:02.415 [2024-07-22 18:40:14.178193] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:33:02.415 [2024-07-22 18:40:14.178227] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:33:02.415 [2024-07-22 18:40:14.178239] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:33:02.415 [2024-07-22 18:40:14.178276] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:33:02.415 [2024-07-22 18:40:14.223854] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:02.415 [2024-07-22 18:40:14.223898] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:02.415 [2024-07-22 18:40:14.224825] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:33:02.415 [2024-07-22 18:40:14.224868] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:33:02.981 18:40:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:33:03.248 18:40:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:03.248 18:40:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:33:03.248 18:40:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:33:03.248 18:40:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.248 18:40:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:33:03.248 18:40:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ 
mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:33:03.248 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.532 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:03.532 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:33:03.532 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:33:03.532 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:33:03.532 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.532 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:03.532 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.532 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:33:03.532 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:33:03.532 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:33:03.532 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:03.532 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.532 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:03.532 [2024-07-22 18:40:15.328148] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:33:03.532 [2024-07-22 18:40:15.328220] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:33:03.532 [2024-07-22 18:40:15.328284] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:03.532 [2024-07-22 18:40:15.328314] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:03.532 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.532 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:33:03.532 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.532 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:03.532 [2024-07-22 18:40:15.334036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:03.532 [2024-07-22 18:40:15.334083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.532 [2024-07-22 18:40:15.334106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:03.532 [2024-07-22 18:40:15.334122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.532 [2024-07-22 18:40:15.334138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:03.532 [2024-07-22 18:40:15.334153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.532 [2024-07-22 18:40:15.334169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:03.532 [2024-07-22 18:40:15.334183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.532 [2024-07-22 
18:40:15.334197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(5) to be set 00:33:03.532 [2024-07-22 18:40:15.340347] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:33:03.532 [2024-07-22 18:40:15.340432] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:03.532 [2024-07-22 18:40:15.340748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:03.532 [2024-07-22 18:40:15.340785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.532 [2024-07-22 18:40:15.340803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:03.532 [2024-07-22 18:40:15.340818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.532 [2024-07-22 18:40:15.340846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:03.532 [2024-07-22 18:40:15.340863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.532 [2024-07-22 18:40:15.340879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:03.532 [2024-07-22 18:40:15.340893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.532 [2024-07-22 18:40:15.340906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:33:03.532 [2024-07-22 18:40:15.343956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:33:03.532 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.532 18:40:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:33:03.532 [2024-07-22 18:40:15.350705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:33:03.533 [2024-07-22 18:40:15.353977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:03.533 [2024-07-22 18:40:15.354149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.533 [2024-07-22 18:40:15.354187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b280 with addr=10.0.0.2, port=4420 00:33:03.533 [2024-07-22 18:40:15.354229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(5) to be set 00:33:03.533 [2024-07-22 18:40:15.354257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:33:03.533 [2024-07-22 18:40:15.354281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:03.533 [2024-07-22 18:40:15.354303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] 
controller reinitialization failed 00:33:03.533 [2024-07-22 18:40:15.354322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:03.533 [2024-07-22 18:40:15.354348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.533 [2024-07-22 18:40:15.360724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:33:03.533 [2024-07-22 18:40:15.360895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.533 [2024-07-22 18:40:15.360927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:33:03.533 [2024-07-22 18:40:15.360944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:33:03.533 [2024-07-22 18:40:15.360970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:33:03.533 [2024-07-22 18:40:15.360994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:33:03.533 [2024-07-22 18:40:15.361008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:33:03.533 [2024-07-22 18:40:15.361022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:33:03.533 [2024-07-22 18:40:15.361045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.533 [2024-07-22 18:40:15.364066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:03.533 [2024-07-22 18:40:15.364186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.533 [2024-07-22 18:40:15.364214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b280 with addr=10.0.0.2, port=4420 00:33:03.533 [2024-07-22 18:40:15.364229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(5) to be set 00:33:03.533 [2024-07-22 18:40:15.364253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:33:03.533 [2024-07-22 18:40:15.364289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:03.533 [2024-07-22 18:40:15.364303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:03.533 [2024-07-22 18:40:15.364317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:03.533 [2024-07-22 18:40:15.364338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
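
[editor's note] The connect() failures with errno 111 in the surrounding records come after the test tears down the original port-4420 listeners while the 4421 paths stay up. A minimal sketch of that step, reconstructed from the mdns_discovery.sh@160-161 lines of this trace; rpc_cmd is the test's wrapper around scripts/rpc.py and is carried over from the log as an assumption:

    # Remove the original 4420 listeners, as exercised at sh@160-161 above.
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420
    # The host-side bdev_nvme reconnect attempts against port 4420 then fail with
    # ECONNREFUSED (errno 111) until the discovery poller drops those paths,
    # which is exactly the retry loop logged around this point.
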
00:33:03.533 [2024-07-22 18:40:15.370822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:33:03.533 [2024-07-22 18:40:15.370964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.533 [2024-07-22 18:40:15.370995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:33:03.533 [2024-07-22 18:40:15.371011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:33:03.533 [2024-07-22 18:40:15.371035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:33:03.533 [2024-07-22 18:40:15.371058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:33:03.533 [2024-07-22 18:40:15.371070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:33:03.533 [2024-07-22 18:40:15.371084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:33:03.533 [2024-07-22 18:40:15.371106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.533 [2024-07-22 18:40:15.374155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:03.533 [2024-07-22 18:40:15.374262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.533 [2024-07-22 18:40:15.374301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b280 with addr=10.0.0.2, port=4420 00:33:03.533 [2024-07-22 18:40:15.374317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(5) to be set 00:33:03.533 [2024-07-22 18:40:15.374342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:33:03.533 [2024-07-22 18:40:15.374364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:03.533 [2024-07-22 18:40:15.374378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:03.533 [2024-07-22 18:40:15.374391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:03.533 [2024-07-22 18:40:15.374413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.533 [2024-07-22 18:40:15.380927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:33:03.533 [2024-07-22 18:40:15.381048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.533 [2024-07-22 18:40:15.381075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:33:03.533 [2024-07-22 18:40:15.381091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:33:03.533 [2024-07-22 18:40:15.381115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:33:03.533 [2024-07-22 18:40:15.381137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:33:03.533 [2024-07-22 18:40:15.381151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:33:03.533 [2024-07-22 18:40:15.381166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:33:03.533 [2024-07-22 18:40:15.381187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.533 [2024-07-22 18:40:15.384227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:03.533 [2024-07-22 18:40:15.384368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.533 [2024-07-22 18:40:15.384397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b280 with addr=10.0.0.2, port=4420 00:33:03.533 [2024-07-22 18:40:15.384414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(5) to be set 00:33:03.533 [2024-07-22 18:40:15.384438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:33:03.533 [2024-07-22 18:40:15.384485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:03.533 [2024-07-22 18:40:15.384501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:03.533 [2024-07-22 18:40:15.384515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:03.533 [2024-07-22 18:40:15.384551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.533 [2024-07-22 18:40:15.391015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:33:03.533 [2024-07-22 18:40:15.391141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.533 [2024-07-22 18:40:15.391170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:33:03.533 [2024-07-22 18:40:15.391186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:33:03.533 [2024-07-22 18:40:15.391210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:33:03.533 [2024-07-22 18:40:15.391231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:33:03.533 [2024-07-22 18:40:15.391262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:33:03.533 [2024-07-22 18:40:15.391285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:33:03.533 [2024-07-22 18:40:15.391308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.533 [2024-07-22 18:40:15.394325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:03.533 [2024-07-22 18:40:15.394461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.533 [2024-07-22 18:40:15.394489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b280 with addr=10.0.0.2, port=4420 00:33:03.533 [2024-07-22 18:40:15.394506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(5) to be set 00:33:03.533 [2024-07-22 18:40:15.394530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:33:03.533 [2024-07-22 18:40:15.394578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:03.533 [2024-07-22 18:40:15.394598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:03.533 [2024-07-22 18:40:15.394612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:03.533 [2024-07-22 18:40:15.394634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
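
[editor's note] Earlier in this trace (sh@131-132 and sh@153-154) the per-controller listener ports are read back through bdev_nvme_get_controllers and a jq/sort/xargs pipeline. A hedged reconstruction of that helper from the @73 lines of the log; the exact function body in the test script may differ:

    # Lists the trsvcid of every path of one controller (e.g. mdns0_nvme0),
    # matching the pipeline visible at mdns_discovery.sh@73 in the trace.
    get_subsystem_paths() {
        local ctrlr=$1
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$ctrlr" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    # Before the 4420 listeners are removed this yields "4420 4421",
    # which is what the [[ 4420 4421 == ... ]] checks above compare against.
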
00:33:03.533 [2024-07-22 18:40:15.401103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:33:03.533 [2024-07-22 18:40:15.401287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.533 [2024-07-22 18:40:15.401316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:33:03.533 [2024-07-22 18:40:15.401333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:33:03.533 [2024-07-22 18:40:15.401356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:33:03.533 [2024-07-22 18:40:15.401378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:33:03.533 [2024-07-22 18:40:15.401391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:33:03.533 [2024-07-22 18:40:15.401406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:33:03.533 [2024-07-22 18:40:15.401427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.534 [2024-07-22 18:40:15.404428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:03.534 [2024-07-22 18:40:15.404540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.534 [2024-07-22 18:40:15.404568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b280 with addr=10.0.0.2, port=4420 00:33:03.534 [2024-07-22 18:40:15.404584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(5) to be set 00:33:03.534 [2024-07-22 18:40:15.404608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:33:03.534 [2024-07-22 18:40:15.404696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:03.534 [2024-07-22 18:40:15.404715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:03.534 [2024-07-22 18:40:15.404728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:03.534 [2024-07-22 18:40:15.404750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.534 [2024-07-22 18:40:15.411234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:33:03.534 [2024-07-22 18:40:15.411365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.534 [2024-07-22 18:40:15.411393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:33:03.534 [2024-07-22 18:40:15.411410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:33:03.534 [2024-07-22 18:40:15.411434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:33:03.534 [2024-07-22 18:40:15.411456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:33:03.534 [2024-07-22 18:40:15.411469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:33:03.534 [2024-07-22 18:40:15.411482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:33:03.534 [2024-07-22 18:40:15.411504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.534 [2024-07-22 18:40:15.414506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:03.534 [2024-07-22 18:40:15.414622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.534 [2024-07-22 18:40:15.414650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b280 with addr=10.0.0.2, port=4420 00:33:03.534 [2024-07-22 18:40:15.414665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(5) to be set 00:33:03.534 [2024-07-22 18:40:15.414688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:33:03.534 [2024-07-22 18:40:15.414733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:03.534 [2024-07-22 18:40:15.414750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:03.534 [2024-07-22 18:40:15.414763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:03.534 [2024-07-22 18:40:15.414800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
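
[editor's note] The notification checks earlier in the trace (sh@88-89, sh@133-134, sh@142-143, sh@155-156) count new events with notify_get_notifications and jq '. | length', then advance notify_id. A sketch reconstructed from those lines; the variable handling is inferred, not copied from the script:

    # Count notifications newer than the last seen notify_id and advance it.
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
            | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }
    # Consistent with the trace: notify_id goes 0 -> 2 -> 4, and the final
    # check with -i 4 returns a count of 0.
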
00:33:03.534 [2024-07-22 18:40:15.421321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:33:03.534 [2024-07-22 18:40:15.421441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.534 [2024-07-22 18:40:15.421468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:33:03.534 [2024-07-22 18:40:15.421484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:33:03.534 [2024-07-22 18:40:15.421508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:33:03.534 [2024-07-22 18:40:15.421530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:33:03.534 [2024-07-22 18:40:15.421544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:33:03.534 [2024-07-22 18:40:15.421557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:33:03.534 [2024-07-22 18:40:15.421579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.534 [2024-07-22 18:40:15.424590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:03.534 [2024-07-22 18:40:15.424708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.534 [2024-07-22 18:40:15.424738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b280 with addr=10.0.0.2, port=4420 00:33:03.534 [2024-07-22 18:40:15.424754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(5) to be set 00:33:03.534 [2024-07-22 18:40:15.424792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:33:03.534 [2024-07-22 18:40:15.424872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:03.534 [2024-07-22 18:40:15.424891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:03.534 [2024-07-22 18:40:15.424905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:03.534 [2024-07-22 18:40:15.424927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.534 [2024-07-22 18:40:15.431410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:33:03.534 [2024-07-22 18:40:15.431537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.534 [2024-07-22 18:40:15.431567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:33:03.534 [2024-07-22 18:40:15.431584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:33:03.534 [2024-07-22 18:40:15.431609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:33:03.534 [2024-07-22 18:40:15.431632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:33:03.534 [2024-07-22 18:40:15.431645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:33:03.534 [2024-07-22 18:40:15.431659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:33:03.534 [2024-07-22 18:40:15.431680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.534 [2024-07-22 18:40:15.434670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:03.534 [2024-07-22 18:40:15.434789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.534 [2024-07-22 18:40:15.434817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b280 with addr=10.0.0.2, port=4420 00:33:03.534 [2024-07-22 18:40:15.434833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(5) to be set 00:33:03.534 [2024-07-22 18:40:15.434886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:33:03.534 [2024-07-22 18:40:15.434935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:03.534 [2024-07-22 18:40:15.434953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:03.534 [2024-07-22 18:40:15.434968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:03.534 [2024-07-22 18:40:15.434990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.534 [2024-07-22 18:40:15.441501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:33:03.534 [2024-07-22 18:40:15.441621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.534 [2024-07-22 18:40:15.441651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:33:03.534 [2024-07-22 18:40:15.441669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:33:03.534 [2024-07-22 18:40:15.441693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:33:03.534 [2024-07-22 18:40:15.441716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:33:03.534 [2024-07-22 18:40:15.441730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:33:03.534 [2024-07-22 18:40:15.441743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:33:03.534 [2024-07-22 18:40:15.441773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.534 [2024-07-22 18:40:15.444755] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:03.534 [2024-07-22 18:40:15.444885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.534 [2024-07-22 18:40:15.444915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b280 with addr=10.0.0.2, port=4420 00:33:03.534 [2024-07-22 18:40:15.444931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(5) to be set 00:33:03.534 [2024-07-22 18:40:15.444956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:33:03.534 [2024-07-22 18:40:15.445001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:03.534 [2024-07-22 18:40:15.445018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:03.534 [2024-07-22 18:40:15.445032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:03.534 [2024-07-22 18:40:15.445055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.534 [2024-07-22 18:40:15.451583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:33:03.534 [2024-07-22 18:40:15.451716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.534 [2024-07-22 18:40:15.451745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:33:03.534 [2024-07-22 18:40:15.451762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:33:03.534 [2024-07-22 18:40:15.451786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:33:03.534 [2024-07-22 18:40:15.451808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:33:03.534 [2024-07-22 18:40:15.451821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:33:03.534 [2024-07-22 18:40:15.451834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:33:03.534 [2024-07-22 18:40:15.451887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.534 [2024-07-22 18:40:15.454838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:03.535 [2024-07-22 18:40:15.454967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.535 [2024-07-22 18:40:15.454994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b280 with addr=10.0.0.2, port=4420 00:33:03.535 [2024-07-22 18:40:15.455010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(5) to be set 00:33:03.535 [2024-07-22 18:40:15.455034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:33:03.535 [2024-07-22 18:40:15.455076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:03.535 [2024-07-22 18:40:15.455092] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:03.535 [2024-07-22 18:40:15.455105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:03.535 [2024-07-22 18:40:15.455141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.535 [2024-07-22 18:40:15.461669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:33:03.535 [2024-07-22 18:40:15.461787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.535 [2024-07-22 18:40:15.461813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:33:03.535 [2024-07-22 18:40:15.461829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:33:03.535 [2024-07-22 18:40:15.461882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:33:03.535 [2024-07-22 18:40:15.461907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:33:03.535 [2024-07-22 18:40:15.461921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:33:03.535 [2024-07-22 18:40:15.461935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:33:03.535 [2024-07-22 18:40:15.461957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.535 [2024-07-22 18:40:15.464934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:03.535 [2024-07-22 18:40:15.465060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.535 [2024-07-22 18:40:15.465088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b280 with addr=10.0.0.2, port=4420 00:33:03.535 [2024-07-22 18:40:15.465104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(5) to be set 00:33:03.535 [2024-07-22 18:40:15.465128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:33:03.535 [2024-07-22 18:40:15.465173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:03.535 [2024-07-22 18:40:15.465190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:03.535 [2024-07-22 18:40:15.465203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:03.535 [2024-07-22 18:40:15.465241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
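The errno = 111 in the reset loop above is ECONNREFUSED: by this point in the test the 4420 listeners appear to have been removed, so each reconnect attempt toward 10.0.0.2:4420 and 10.0.0.3:4420 is refused until the discovery poller re-attaches the subsystems on 4421 (the "not found" / "found again" records just below). A hypothetical manual check, modeled on the get_subsystem_paths helper traced below and assuming the same /tmp/host.sock RPC socket and rpc.py path used elsewhere in this log, might look like:
  # sketch only: print the trsvcid of every active path of controller mdns0_nvme0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
      bdev_nvme_get_controllers -n mdns0_nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  # expected to print 4421 once the failover shown in this log has settled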
00:33:03.535 [2024-07-22 18:40:15.471753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:33:03.535 [2024-07-22 18:40:15.471884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.535 [2024-07-22 18:40:15.471912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:33:03.535 [2024-07-22 18:40:15.471928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:33:03.535 [2024-07-22 18:40:15.471952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:33:03.535 [2024-07-22 18:40:15.471973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:33:03.535 [2024-07-22 18:40:15.471986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:33:03.535 [2024-07-22 18:40:15.471999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:33:03.535 [2024-07-22 18:40:15.472021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.535 [2024-07-22 18:40:15.472243] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:33:03.535 [2024-07-22 18:40:15.472296] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:33:03.535 [2024-07-22 18:40:15.472354] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:33:03.535 [2024-07-22 18:40:15.472424] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:03.535 [2024-07-22 18:40:15.472451] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:03.535 [2024-07-22 18:40:15.472479] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:03.793 [2024-07-22 18:40:15.558322] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:33:03.793 [2024-07-22 18:40:15.558493] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:04.359 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:33:04.359 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:04.359 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.359 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:04.359 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:33:04.359 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:33:04.359 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:33:04.359 18:40:16 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 
4421 == \4\4\2\1 ]] 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.617 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:04.876 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.876 18:40:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:33:04.876 [2024-07-22 18:40:16.719454] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:33:05.810 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:33:05.810 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:33:05.810 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:33:05.810 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:33:05.810 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:33:05.810 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.810 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:05.810 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.810 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:33:05.810 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:33:05.810 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:05.810 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.811 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:05.811 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:33:05.811 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:33:05.811 18:40:17 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:33:05.811 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.811 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:33:05.811 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:33:05.811 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:05.811 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:33:05.811 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.811 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:05.811 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:33:05.811 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:33:05.811 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.811 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:33:05.811 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:33:05.811 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:33:05.811 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.811 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:05.811 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:33:05.811 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.070 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:33:06.070 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:33:06.070 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:33:06.070 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:33:06.070 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.070 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.070 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.070 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:33:06.070 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:33:06.070 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:33:06.070 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:06.070 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:06.070 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:06.070 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:06.070 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:33:06.070 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.070 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.070 [2024-07-22 18:40:17.883642] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:33:06.070 2024/07/22 18:40:17 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:33:06.070 request: 00:33:06.070 { 00:33:06.070 "method": "bdev_nvme_start_mdns_discovery", 00:33:06.070 "params": { 00:33:06.070 "name": "mdns", 00:33:06.070 "svcname": "_nvme-disc._http", 00:33:06.070 "hostnqn": "nqn.2021-12.io.spdk:test" 00:33:06.070 } 00:33:06.070 } 00:33:06.070 Got JSON-RPC error response 00:33:06.070 GoRPCClient: error on JSON-RPC call 00:33:06.070 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:06.070 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:33:06.070 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:06.070 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:06.070 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:06.070 18:40:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:33:06.637 [2024-07-22 18:40:18.472648] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:33:06.637 [2024-07-22 18:40:18.572597] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:33:06.896 [2024-07-22 18:40:18.672622] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:33:06.896 [2024-07-22 18:40:18.672673] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:33:06.896 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:33:06.896 cookie is 0 00:33:06.896 is_local: 1 00:33:06.896 our_own: 0 00:33:06.896 wide_area: 0 00:33:06.896 multicast: 1 00:33:06.896 cached: 1 00:33:06.896 [2024-07-22 18:40:18.772616] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:33:06.896 [2024-07-22 18:40:18.772693] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:33:06.896 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:33:06.896 cookie is 0 00:33:06.896 is_local: 1 00:33:06.896 our_own: 0 00:33:06.896 wide_area: 0 00:33:06.896 multicast: 1 00:33:06.896 cached: 1 00:33:06.896 [2024-07-22 18:40:18.772718] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:33:06.896 [2024-07-22 18:40:18.872625] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:33:06.896 [2024-07-22 18:40:18.872679] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:33:06.896 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:33:06.896 cookie is 0 00:33:06.896 is_local: 1 00:33:06.896 our_own: 0 00:33:06.896 wide_area: 0 00:33:06.896 multicast: 1 00:33:06.896 cached: 1 00:33:07.154 [2024-07-22 18:40:18.972674] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:33:07.154 [2024-07-22 18:40:18.972755] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:33:07.154 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:33:07.154 cookie is 0 00:33:07.154 is_local: 1 00:33:07.154 our_own: 0 00:33:07.154 wide_area: 0 00:33:07.154 multicast: 1 00:33:07.154 cached: 1 00:33:07.154 [2024-07-22 18:40:18.972784] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:33:07.725 [2024-07-22 18:40:19.686548] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:33:07.725 [2024-07-22 18:40:19.686609] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:33:07.725 [2024-07-22 18:40:19.686698] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:33:07.983 [2024-07-22 18:40:19.772861] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:33:07.983 [2024-07-22 18:40:19.843364] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:33:07.983 [2024-07-22 18:40:19.843432] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:33:07.983 [2024-07-22 18:40:19.886030] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:07.983 [2024-07-22 18:40:19.886066] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:07.983 [2024-07-22 18:40:19.886114] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:07.983 [2024-07-22 18:40:19.972240] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:33:08.241 [2024-07-22 18:40:20.042872] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:33:08.241 [2024-07-22 18:40:20.042974] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:11.524 18:40:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:33:11.524 18:40:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:33:11.524 18:40:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:33:11.524 18:40:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:33:11.524 18:40:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.524 18:40:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:33:11.524 18:40:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.524 18:40:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.524 18:40:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:33:11.524 18:40:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:33:11.524 18:40:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:11.524 18:40:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.524 18:40:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:33:11.524 18:40:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
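Both negative checks in this part of the test (the one above with name mdns / svcname _nvme-disc._http, and the one just below with name cdc / svcname _nvme-disc._tcp) are expected to fail with JSON-RPC Code=-17 (File exists), since an mDNS discovery poller named mdns for _nvme-disc._tcp is already running. A sketch of issuing the same call directly, assuming the /tmp/host.sock socket and rpc.py path shown in this log, would be:
  # sketch only: duplicate mDNS discovery request; with the "mdns" poller active this
  # is expected to return Code=-17 Msg="File exists", as in the traces above and below
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
      bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test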
00:33:11.524 18:40:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:33:11.524 18:40:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:33:11.524 18:40:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.525 [2024-07-22 18:40:23.073212] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:33:11.525 2024/07/22 18:40:23 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:33:11.525 
request: 00:33:11.525 { 00:33:11.525 "method": "bdev_nvme_start_mdns_discovery", 00:33:11.525 "params": { 00:33:11.525 "name": "cdc", 00:33:11.525 "svcname": "_nvme-disc._tcp", 00:33:11.525 "hostnqn": "nqn.2021-12.io.spdk:test" 00:33:11.525 } 00:33:11.525 } 00:33:11.525 Got JSON-RPC error response 00:33:11.525 GoRPCClient: error on JSON-RPC call 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.525 18:40:23 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 105584 00:33:11.525 18:40:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 105584 00:33:11.525 [2024-07-22 18:40:23.475154] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:33:12.496 18:40:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 105613 00:33:12.496 Got SIGTERM, quitting. 00:33:12.496 18:40:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:33:12.496 18:40:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:12.496 18:40:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:33:12.496 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:33:12.496 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:33:12.496 avahi-daemon 0.8 exiting. 00:33:12.496 18:40:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:12.496 18:40:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:33:12.496 18:40:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:12.496 18:40:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:12.496 rmmod nvme_tcp 00:33:12.496 rmmod nvme_fabrics 00:33:12.496 rmmod nvme_keyring 00:33:12.754 18:40:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:12.754 18:40:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:33:12.754 18:40:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:33:12.754 18:40:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 105534 ']' 00:33:12.754 18:40:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 105534 00:33:12.755 18:40:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@948 -- # '[' -z 105534 ']' 00:33:12.755 18:40:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # kill -0 105534 00:33:12.755 18:40:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # uname 00:33:12.755 18:40:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:12.755 18:40:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 105534 00:33:12.755 killing process with pid 105534 00:33:12.755 18:40:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:12.755 18:40:24 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:12.755 18:40:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 105534' 00:33:12.755 18:40:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@967 -- # kill 105534 00:33:12.755 18:40:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # wait 105534 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:33:14.130 00:33:14.130 real 0m22.967s 00:33:14.130 user 0m43.282s 00:33:14.130 sys 0m2.414s 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.130 ************************************ 00:33:14.130 END TEST nvmf_mdns_discovery 00:33:14.130 ************************************ 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.130 ************************************ 00:33:14.130 START TEST nvmf_host_multipath 00:33:14.130 ************************************ 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:33:14.130 * Looking for test storage... 
00:33:14.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.130 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.131 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.131 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:33:14.131 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.131 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:33:14.131 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:14.131 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:14.131 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:14.131 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:14.131 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:14.131 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:14.131 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:14.131 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:14.131 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:14.131 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:14.131 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:14.131 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 
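For reference, a condensed sketch of the veth/namespace topology that the nvmf_veth_init steps traced below assemble, assuming the interface names and 10.0.0.x addresses shown in this log (the bridge wiring happens later in the setup):
  ip netns add nvmf_tgt_ns_spdk                                # target-side network namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # first target veth pair
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target ends into the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                     # NVMF_INITIATOR_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP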
00:33:14.131 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:14.131 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:33:14.131 18:40:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:33:14.131 Cannot find device "nvmf_tgt_br" 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:33:14.131 Cannot find device "nvmf_tgt_br2" 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:33:14.131 Cannot find device "nvmf_tgt_br" 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:33:14.131 Cannot find device "nvmf_tgt_br2" 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:33:14.131 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:14.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:14.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:33:14.390 18:40:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:33:14.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:14.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:33:14.390 00:33:14.390 --- 10.0.0.2 ping statistics --- 00:33:14.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.390 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:33:14.390 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:14.390 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:33:14.390 00:33:14.390 --- 10.0.0.3 ping statistics --- 00:33:14.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.390 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:14.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:14.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:33:14.390 00:33:14.390 --- 10.0.0.1 ping statistics --- 00:33:14.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.390 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=106196 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 106196 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 106196 ']' 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:14.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:14.390 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:14.648 [2024-07-22 18:40:26.477741] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
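The nvmf_veth_init trace above is hard to follow across the wrapped log records. Condensed, it builds the test topology this way: a network namespace for the target, three veth pairs whose host-side ends are enslaved to a bridge, 10.0.0.1 on the initiator interface, 10.0.0.2 and 10.0.0.3 on the target interfaces inside the namespace, iptables rules admitting TCP port 4420 and bridge-internal forwarding, ping checks, and finally the target application launched inside the namespace. The sketch below is reconstructed from the traced commands only; it is not a literal excerpt of nvmf/common.sh, and the cleanup attempts and error handling have been omitted.

# Reconstructed sketch of the nvmf_veth_init steps traced above; interface, namespace and
# address names are taken verbatim from the log, cleanup steps and error handling omitted.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # first target veth pair (10.0.0.2)
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target veth pair (10.0.0.3)
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# After the ping checks shown above, the target is started inside the namespace:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &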
00:33:14.648 [2024-07-22 18:40:26.477952] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:14.648 [2024-07-22 18:40:26.651063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:15.214 [2024-07-22 18:40:26.974231] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:15.214 [2024-07-22 18:40:26.974313] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:15.214 [2024-07-22 18:40:26.974332] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:15.214 [2024-07-22 18:40:26.974349] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:15.214 [2024-07-22 18:40:26.974362] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:15.214 [2024-07-22 18:40:26.974596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:15.214 [2024-07-22 18:40:26.974611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:15.471 18:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:15.472 18:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:33:15.472 18:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:15.472 18:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:15.472 18:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:15.472 18:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:15.472 18:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=106196 00:33:15.472 18:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:15.729 [2024-07-22 18:40:27.701635] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:15.729 18:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:16.295 Malloc0 00:33:16.295 18:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:33:16.553 18:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:16.810 18:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:17.068 [2024-07-22 18:40:28.918375] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:17.068 18:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 
-s 4421 00:33:17.325 [2024-07-22 18:40:29.170567] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:17.325 18:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:17.325 18:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=106294 00:33:17.325 18:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:17.325 18:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 106294 /var/tmp/bdevperf.sock 00:33:17.325 18:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 106294 ']' 00:33:17.325 18:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:17.325 18:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:17.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:17.325 18:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:17.325 18:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:17.325 18:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:18.258 18:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:18.258 18:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:33:18.258 18:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:18.516 18:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:33:19.082 Nvme0n1 00:33:19.082 18:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:19.340 Nvme0n1 00:33:19.340 18:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:33:19.340 18:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:20.278 18:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:33:20.278 18:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:20.844 18:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 
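Taken together, the RPC calls traced up to this point configure both ends for the multipath run: a TCP transport, one 64 MiB / 512 B malloc namespace under nqn.2016-06.io.spdk:cnode1 with ANA reporting, listeners on ports 4420 and 4421 of 10.0.0.2, and two bdev_nvme_attach_controller calls on the bdevperf side that give the Nvme0 bdev one path per listener (the second attach passing -x multipath). The condensed sequence below is reconstructed from those traced invocations; it is a sketch, not a literal excerpt of host/multipath.sh, and the waitforlisten synchronization the test performs between steps is omitted.

# Condensed from the rpc.py and bdevperf invocations traced above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# Target side: TCP transport, one 64 MiB / 512 B malloc namespace, two listeners on 10.0.0.2.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -r -m 2   # -r: ANA reporting
$rpc nvmf_subsystem_add_ns "$NQN" Malloc0
$rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421

# Host side: bdevperf with one controller per listener; the second attach is multipath-aware,
# so both connections back the same Nvme0n1 bdev.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 90 &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n "$NQN" -l -1 -o 10
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n "$NQN" -x multipath -l -1 -o 10

# The 90-second verify workload is then driven over the bdevperf RPC socket:
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &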
00:33:20.844 18:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:33:20.844 18:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=106387 00:33:20.844 18:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106196 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:33:20.844 18:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:33:27.405 18:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:33:27.405 18:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:33:27.405 18:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:33:27.405 18:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:27.405 Attaching 4 probes... 00:33:27.405 @path[10.0.0.2, 4421]: 13317 00:33:27.405 @path[10.0.0.2, 4421]: 13700 00:33:27.405 @path[10.0.0.2, 4421]: 13520 00:33:27.405 @path[10.0.0.2, 4421]: 13158 00:33:27.405 @path[10.0.0.2, 4421]: 12873 00:33:27.405 18:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:33:27.405 18:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:33:27.405 18:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:33:27.405 18:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:33:27.405 18:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:33:27.405 18:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:33:27.405 18:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 106387 00:33:27.405 18:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:27.405 18:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:33:27.405 18:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:27.405 18:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:27.664 18:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:33:27.664 18:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=106517 00:33:27.664 18:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:33:27.664 18:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106196 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:33:34.226 18:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:33:34.226 18:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:33:34.226 18:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:33:34.226 18:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:34.226 Attaching 4 probes... 00:33:34.226 @path[10.0.0.2, 4420]: 12125 00:33:34.226 @path[10.0.0.2, 4420]: 11967 00:33:34.226 @path[10.0.0.2, 4420]: 12012 00:33:34.226 @path[10.0.0.2, 4420]: 13205 00:33:34.226 @path[10.0.0.2, 4420]: 12299 00:33:34.226 18:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:33:34.226 18:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:33:34.226 18:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:33:34.226 18:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:33:34.226 18:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:33:34.226 18:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:33:34.226 18:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 106517 00:33:34.226 18:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:34.226 18:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:33:34.226 18:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:34.484 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:34.743 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:33:34.743 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106196 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:33:34.743 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=106642 00:33:34.743 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:33:41.306 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:33:41.306 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:33:41.306 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:33:41.306 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:41.306 Attaching 4 probes... 
00:33:41.306 @path[10.0.0.2, 4421]: 10170 00:33:41.306 @path[10.0.0.2, 4421]: 12826 00:33:41.306 @path[10.0.0.2, 4421]: 13177 00:33:41.306 @path[10.0.0.2, 4421]: 11695 00:33:41.306 @path[10.0.0.2, 4421]: 12784 00:33:41.306 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:33:41.306 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:33:41.306 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:33:41.306 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:33:41.306 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:33:41.306 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:33:41.306 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 106642 00:33:41.306 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:41.306 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:33:41.307 18:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:41.307 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:41.564 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:33:41.564 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=106769 00:33:41.564 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106196 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:33:41.564 18:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:33:48.124 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:33:48.124 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:33:48.124 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:33:48.124 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:48.124 Attaching 4 probes... 
00:33:48.124 00:33:48.124 00:33:48.124 00:33:48.124 00:33:48.124 00:33:48.124 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:33:48.124 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:33:48.124 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:33:48.124 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:33:48.124 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:33:48.124 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:33:48.125 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 106769 00:33:48.125 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:48.125 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:33:48.125 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:48.125 18:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:48.383 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:33:48.383 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=106901 00:33:48.383 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106196 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:33:48.384 18:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:33:54.968 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:33:54.968 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:33:54.968 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:33:54.968 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:54.968 Attaching 4 probes... 
00:33:54.968 @path[10.0.0.2, 4421]: 12135 00:33:54.968 @path[10.0.0.2, 4421]: 12310 00:33:54.968 @path[10.0.0.2, 4421]: 12367 00:33:54.968 @path[10.0.0.2, 4421]: 12515 00:33:54.968 @path[10.0.0.2, 4421]: 12526 00:33:54.968 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:33:54.968 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:33:54.968 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:33:54.968 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:33:54.968 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:33:54.968 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:33:54.968 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 106901 00:33:54.968 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:54.968 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:54.968 [2024-07-22 18:41:06.787779] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.787868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.787886] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.787900] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.787913] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.787927] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.787941] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.787953] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.787965] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.787978] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.787991] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788004] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788017] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 
[2024-07-22 18:41:06.788032] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788056] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788069] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788082] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788095] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788107] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788120] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788133] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788146] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788160] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788173] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788186] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788199] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788211] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788224] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788237] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788251] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788263] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788277] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788289] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 
[2024-07-22 18:41:06.788316] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788892] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788914] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788934] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788947] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788960] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788973] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788985] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.788998] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 [2024-07-22 18:41:06.789012] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:54.969 18:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:33:55.905 18:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:33:55.905 18:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=107026 00:33:55.905 18:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106196 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:33:55.905 18:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:34:02.496 18:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:34:02.496 18:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:34:02.496 18:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:34:02.496 18:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:34:02.496 Attaching 4 probes... 
00:34:02.496 @path[10.0.0.2, 4420]: 12002 00:34:02.496 @path[10.0.0.2, 4420]: 12087 00:34:02.496 @path[10.0.0.2, 4420]: 12593 00:34:02.496 @path[10.0.0.2, 4420]: 12431 00:34:02.496 @path[10.0.0.2, 4420]: 11948 00:34:02.496 18:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:34:02.496 18:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:34:02.496 18:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:34:02.496 18:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:34:02.496 18:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:34:02.496 18:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:34:02.496 18:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 107026 00:34:02.496 18:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:34:02.496 18:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:02.496 [2024-07-22 18:41:14.442322] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:02.496 18:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:02.754 18:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:34:09.348 18:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:34:09.348 18:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=107214 00:34:09.348 18:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106196 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:34:09.348 18:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:34:15.912 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:34:15.912 18:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:34:15.912 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:34:15.912 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:34:15.912 Attaching 4 probes... 
00:34:15.912 @path[10.0.0.2, 4421]: 11403 00:34:15.912 @path[10.0.0.2, 4421]: 11617 00:34:15.912 @path[10.0.0.2, 4421]: 11622 00:34:15.912 @path[10.0.0.2, 4421]: 11835 00:34:15.912 @path[10.0.0.2, 4421]: 10773 00:34:15.912 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:34:15.912 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:34:15.912 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:34:15.912 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:34:15.912 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:34:15.912 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:34:15.912 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 107214 00:34:15.912 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:34:15.912 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 106294 00:34:15.912 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 106294 ']' 00:34:15.912 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 106294 00:34:15.912 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:34:15.912 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:15.912 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 106294 00:34:15.912 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:34:15.912 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:34:15.912 killing process with pid 106294 00:34:15.912 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 106294' 00:34:15.912 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 106294 00:34:15.912 18:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 106294 00:34:15.912 Connection closed with partial response: 00:34:15.912 00:34:15.912 00:34:16.522 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 106294 00:34:16.522 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:34:16.522 [2024-07-22 18:40:29.294148] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:34:16.522 [2024-07-22 18:40:29.294391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106294 ] 00:34:16.522 [2024-07-22 18:40:29.465678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.522 [2024-07-22 18:40:29.739499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:16.522 Running I/O for 90 seconds... 
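Each confirm_io_on_port round above ("Attaching 4 probes..." followed by @path[10.0.0.2, ...] counters) follows the same pattern, visible in the multipath.sh@64-73 trace: start scripts/bpf/nvmf_path.bt against the target via bpftrace.sh, let bdevperf issue I/O for six seconds, ask the target which listener currently reports the expected ANA state, and compare that with the port the @path counters say the I/O actually used. The sketch below is reconstructed from those traced commands; expected_state and expected_port stand in for the function's arguments, nvmfapp_pid is the target PID captured earlier in the log, and the redirection of the bpftrace output into trace.txt is assumed (only the command and a later cat of the file are logged).

# Reconstructed from the confirm_io_on_port steps traced above (multipath.sh@64-73).
expected_state=optimized
expected_port=4421
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh
trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

$bpf_sh "$nvmfapp_pid" /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt &> "$trace" &
dtrace_pid=$!
sleep 6    # let bdevperf issue I/O while the probes count completions per path

# Which listener currently reports the expected ANA state?
active_port=$($rpc nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
    | jq -r ".[] | select (.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")

# Which port did the I/O actually land on, according to the first @path line in the trace?
port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$trace" | sed -n 1p | cut -d ']' -f1)

[[ $active_port == "$expected_port" ]]
[[ $port == "$expected_port" ]]

kill $dtrace_pid
rm -f "$trace"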
00:34:16.522 [2024-07-22 18:40:39.644644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.522 [2024-07-22 18:40:39.644766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:16.522 [2024-07-22 18:40:39.644937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.522 [2024-07-22 18:40:39.644977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:16.522 [2024-07-22 18:40:39.645022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.522 [2024-07-22 18:40:39.645052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:16.522 [2024-07-22 18:40:39.645091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.522 [2024-07-22 18:40:39.645121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:16.522 [2024-07-22 18:40:39.645160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.522 [2024-07-22 18:40:39.645188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:16.522 [2024-07-22 18:40:39.645242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:49384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.522 [2024-07-22 18:40:39.645268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.522 [2024-07-22 18:40:39.645306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.522 [2024-07-22 18:40:39.645333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.522 [2024-07-22 18:40:39.645371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.522 [2024-07-22 18:40:39.645398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:16.522 [2024-07-22 18:40:39.645754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.522 [2024-07-22 18:40:39.645803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:16.522 [2024-07-22 18:40:39.645852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:49416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.522 [2024-07-22 18:40:39.645912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:38 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:16.522 [2024-07-22 18:40:39.645958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.522 [2024-07-22 18:40:39.646035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:16.522 [2024-07-22 18:40:39.646083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.522 [2024-07-22 18:40:39.646113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:16.522 [2024-07-22 18:40:39.646163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.522 [2024-07-22 18:40:39.646190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:16.522 [2024-07-22 18:40:39.646228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.522 [2024-07-22 18:40:39.646256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:16.522 [2024-07-22 18:40:39.646295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.522 [2024-07-22 18:40:39.646323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.646362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.523 [2024-07-22 18:40:39.646389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.646429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:49104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.523 [2024-07-22 18:40:39.646456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.646497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.523 [2024-07-22 18:40:39.646525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.646564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:49120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.523 [2024-07-22 18:40:39.646591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.646629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:49128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.523 [2024-07-22 18:40:39.646657] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.646702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:49136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.523 [2024-07-22 18:40:39.646728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.646779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:49144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.523 [2024-07-22 18:40:39.646826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.646914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:49152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.523 [2024-07-22 18:40:39.646963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.647030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:49160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.523 [2024-07-22 18:40:39.647067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.647106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:49168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.523 [2024-07-22 18:40:39.647135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.647188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:49176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.523 [2024-07-22 18:40:39.647220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.647270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:49184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.523 [2024-07-22 18:40:39.647300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.647341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:49192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.523 [2024-07-22 18:40:39.647369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.647408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:49200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.523 [2024-07-22 18:40:39.647436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.647475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:49208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:16.523 [2024-07-22 18:40:39.647503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.647542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:49216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.523 [2024-07-22 18:40:39.647571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.651792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.523 [2024-07-22 18:40:39.651867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.651927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.523 [2024-07-22 18:40:39.651960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.652002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.523 [2024-07-22 18:40:39.652031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.652085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.523 [2024-07-22 18:40:39.652117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.652187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:49504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.523 [2024-07-22 18:40:39.652218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.652273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:49512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.523 [2024-07-22 18:40:39.652304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.652352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:49520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.523 [2024-07-22 18:40:39.652382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.652430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:49528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.523 [2024-07-22 18:40:39.652459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.652507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 
lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.523 [2024-07-22 18:40:39.652536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.652576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.523 [2024-07-22 18:40:39.652603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.652643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.523 [2024-07-22 18:40:39.652671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.652711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.523 [2024-07-22 18:40:39.652739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.652778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.523 [2024-07-22 18:40:39.652812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.652869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.523 [2024-07-22 18:40:39.652901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.652940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:49584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.523 [2024-07-22 18:40:39.652968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.653036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.523 [2024-07-22 18:40:39.653065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.653115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.523 [2024-07-22 18:40:39.653158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.653200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:49608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.523 [2024-07-22 18:40:39.653229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.653267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.523 [2024-07-22 18:40:39.653295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.653335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:49624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.523 [2024-07-22 18:40:39.653362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.653401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.523 [2024-07-22 18:40:39.653428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.653467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.523 [2024-07-22 18:40:39.653495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:16.523 [2024-07-22 18:40:39.653534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.524 [2024-07-22 18:40:39.653561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:39.653600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.524 [2024-07-22 18:40:39.653627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:39.653666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:49224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.524 [2024-07-22 18:40:39.653693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:39.653731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:49232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.524 [2024-07-22 18:40:39.653759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:39.653799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:49240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.524 [2024-07-22 18:40:39.653826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:39.653886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:49248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.524 [2024-07-22 18:40:39.653916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
00:34:16.524 [2024-07-22 18:40:39.653955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.524 [2024-07-22 18:40:39.653997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:39.654067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.524 [2024-07-22 18:40:39.654096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:39.654137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:49272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.524 [2024-07-22 18:40:39.654165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:39.654206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:49280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.524 [2024-07-22 18:40:39.654234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:39.654273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:49288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.524 [2024-07-22 18:40:39.654300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:39.654340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:49296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.524 [2024-07-22 18:40:39.654375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:39.654414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:49304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.524 [2024-07-22 18:40:39.654441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:39.654489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:49312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.524 [2024-07-22 18:40:39.654517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:39.654556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.524 [2024-07-22 18:40:39.654583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:39.654622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:49328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.524 [2024-07-22 18:40:39.654650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:39.654690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.524 [2024-07-22 18:40:39.654718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:46.249544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.524 [2024-07-22 18:40:46.249650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:46.249762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:112656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.524 [2024-07-22 18:40:46.249800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:46.249903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:112664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.524 [2024-07-22 18:40:46.249936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:46.249977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:112672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.524 [2024-07-22 18:40:46.250018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:46.250066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:112680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.524 [2024-07-22 18:40:46.250095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:46.250136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.524 [2024-07-22 18:40:46.250164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:46.250202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.524 [2024-07-22 18:40:46.250229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:46.250267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:112704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.524 [2024-07-22 18:40:46.250295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:46.250333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.524 [2024-07-22 18:40:46.250360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:46.250400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.524 [2024-07-22 18:40:46.250427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:46.250466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.524 [2024-07-22 18:40:46.250493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:46.250530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.524 [2024-07-22 18:40:46.250558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:46.250596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.524 [2024-07-22 18:40:46.250624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:46.250663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.524 [2024-07-22 18:40:46.250690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:46.250741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.524 [2024-07-22 18:40:46.250770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:46.250809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.524 [2024-07-22 18:40:46.250852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:46.250897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.524 [2024-07-22 18:40:46.250925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:46.250963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.524 [2024-07-22 18:40:46.250990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:46.251028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:16.524 [2024-07-22 18:40:46.251055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:46.251092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.524 [2024-07-22 18:40:46.251120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:46.251157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.524 [2024-07-22 18:40:46.251186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:46.251224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:112816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.524 [2024-07-22 18:40:46.251252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:16.524 [2024-07-22 18:40:46.251288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.251316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.251354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:112832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.251381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.251419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.251446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.251484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.251511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.251548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.251590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.251632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:112864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.251661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.251699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:47 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.251727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.251768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.251796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.251849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.251881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.251921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:112896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.251949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.251988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.252015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.252054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:112912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.252082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.252120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:112920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.252148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.252186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:112928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.252213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.252252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:112936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.252281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.252319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:112944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.252347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.252385] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.252424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.252465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.252494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.252560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.252589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.252628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:112976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.252656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.252703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.252730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.252769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:112992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.252796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.252854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.252886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.252926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.252954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.252993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.253022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.253061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:113024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.253089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 
sqhd:0034 p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.253129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:113032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.253157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.253197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:113040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.253225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.253264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.253306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.253782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.253824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.253911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:113064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.253944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.253989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.254034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.254080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.254110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.254154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:113088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.254182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.254225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.254254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.254297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:113104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.254325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.254367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.254406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.254449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:113120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.254478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:16.525 [2024-07-22 18:40:46.254521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:113128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.525 [2024-07-22 18:40:46.254549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.254592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:113136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.254620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.254661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:113144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.254690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.254748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:113152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.254779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.254821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:113160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.254869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.254931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:113168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.254961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.255003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:113176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.255031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.255072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 
18:40:46.255100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.255142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:113192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.255169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.255210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:113200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.255238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.255278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:113208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.255307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.255348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:113216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.255375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.255416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:113224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.255444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.255504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:113232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.255532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.255574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:113240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.255613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.255668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.255699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.255743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.255772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.255814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:113264 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.255842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.255900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.255933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.255976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.256016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.256061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.256090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.256141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.256170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.256212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.256242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.256285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.256313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.256355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.256384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.256425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.256454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.256496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.256525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.256568] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.256607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.256664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.256693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.256737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:113360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.256766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.256809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.256855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.256905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.256935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.256978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:113384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.257007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.257051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.257080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.257122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:113400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.257150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:16.526 [2024-07-22 18:40:46.257192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.526 [2024-07-22 18:40:46.257225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.257267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.528 [2024-07-22 18:40:46.257296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 
dnr:0 00:34:16.528 [2024-07-22 18:40:46.257337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:113424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.528 [2024-07-22 18:40:46.257365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.257407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.528 [2024-07-22 18:40:46.257436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.257478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.528 [2024-07-22 18:40:46.257517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.257561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.528 [2024-07-22 18:40:46.257589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.257632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.528 [2024-07-22 18:40:46.257661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.257703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:113464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.528 [2024-07-22 18:40:46.257732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.257774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.528 [2024-07-22 18:40:46.257802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.257885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:113480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.528 [2024-07-22 18:40:46.257916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.257958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.528 [2024-07-22 18:40:46.257987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.258044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.528 [2024-07-22 18:40:46.258075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.258118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:113504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.528 [2024-07-22 18:40:46.258146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.258189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.528 [2024-07-22 18:40:46.258217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.258259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.528 [2024-07-22 18:40:46.258288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.258331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:113528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.528 [2024-07-22 18:40:46.258359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.258411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.528 [2024-07-22 18:40:46.258439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.258495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.528 [2024-07-22 18:40:46.258525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.258568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.528 [2024-07-22 18:40:46.258596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.258638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.528 [2024-07-22 18:40:46.258666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.258708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.528 [2024-07-22 18:40:46.258737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.258778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.528 [2024-07-22 18:40:46.258806] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.258869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.528 [2024-07-22 18:40:46.258902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.259166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.528 [2024-07-22 18:40:46.259203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.259260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:113544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.528 [2024-07-22 18:40:46.259290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.259356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.528 [2024-07-22 18:40:46.259385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.259430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.528 [2024-07-22 18:40:46.259476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.259551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.528 [2024-07-22 18:40:46.259580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.259629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.528 [2024-07-22 18:40:46.259658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.259721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.528 [2024-07-22 18:40:46.259752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.259800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:113592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.528 [2024-07-22 18:40:46.259828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.259876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:113600 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:34:16.528 [2024-07-22 18:40:46.259937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:46.259989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.528 [2024-07-22 18:40:46.260019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:16.528 [2024-07-22 18:40:53.309329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:28888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.309439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.309496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:28896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.309524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.309557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:28904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.309581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.309612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:28912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.309634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.309665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:28920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.309686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.309717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.309739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.309770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.309822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.309869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:28944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.309894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.309948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:51 nsid:1 lba:28952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.309972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.310016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:28960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.310040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.310072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:28968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.310095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.310125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:28976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.310149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.310180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:28984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.310201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.310231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:28992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.310253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.310284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.310306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.310335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.310357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.310387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.310408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.310441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.310463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.311051] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:29032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.311088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.311129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.311154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.311187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.311223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.311257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.311279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.311310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.311332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.311362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.311384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.311415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:29080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.311437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.311468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:29088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.311489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.311519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:29096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.311540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.311571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:29104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.311592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 
00:34:16.529 [2024-07-22 18:40:53.311623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:29112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.311645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.311675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:29120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.311696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.311728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:29128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.311750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.311780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:29136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.311802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.311848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:29144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.311883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.311919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.311943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.311974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:29160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.311995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.312027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.312048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.312079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:29176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.312101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.312131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:29184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.312153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:81 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.312185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.529 [2024-07-22 18:40:53.312206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:16.529 [2024-07-22 18:40:53.312237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.312258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.312312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:29208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.312336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.312367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.312388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.312420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.312441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.312472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:29232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.312493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.312525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.312547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.312603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:29248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.312626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.312656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:29256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.312677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.312707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:29264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.312728] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.312759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:29272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.312781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.312810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:29280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.312831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.312896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.312921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.312954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.312976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.313007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:29304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.313029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.313059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:29312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.313080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.313111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:29320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.313132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.313163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:29328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.313184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.313215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:29336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.313236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.313277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:16.530 [2024-07-22 18:40:53.313300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.313332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:29352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.313353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.313389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:29360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.313411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.313442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:29368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.313464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.313495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.313517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.313548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.313569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.313614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.313635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.313665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:29400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.313686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.313715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:29408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.313736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.313766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.313787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.313816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 
lba:29424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.313837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.313898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:29432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.313922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.313953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.313984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.314037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:29448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.314061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.314092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:29456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.314114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.314144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:29464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.314165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.314195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.314217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.314248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.314270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.314313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:29488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.314340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.314376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:29496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.314398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.314428] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:29504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.530 [2024-07-22 18:40:53.314450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:16.530 [2024-07-22 18:40:53.314479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.531 [2024-07-22 18:40:53.314511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.314540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.531 [2024-07-22 18:40:53.314562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.314607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:29528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.531 [2024-07-22 18:40:53.314628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.314657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:29536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.531 [2024-07-22 18:40:53.314678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.314716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:29544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.531 [2024-07-22 18:40:53.314739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.314768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.531 [2024-07-22 18:40:53.314789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.314817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:29560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.531 [2024-07-22 18:40:53.314837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.314902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:29568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.531 [2024-07-22 18:40:53.314926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.314956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:28736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.531 [2024-07-22 18:40:53.314978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003e p:0 m:0 dnr:0 
00:34:16.531 [2024-07-22 18:40:53.315008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:28744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.531 [2024-07-22 18:40:53.315029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.315060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.531 [2024-07-22 18:40:53.315083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.316641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.531 [2024-07-22 18:40:53.316679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.316721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.531 [2024-07-22 18:40:53.316745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.316777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.531 [2024-07-22 18:40:53.316799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.316839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.531 [2024-07-22 18:40:53.316898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.316932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:29600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.531 [2024-07-22 18:40:53.316956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.317001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.531 [2024-07-22 18:40:53.317025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.317055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.531 [2024-07-22 18:40:53.317077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.317107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.531 [2024-07-22 18:40:53.317129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.317159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.531 [2024-07-22 18:40:53.317181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.317210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:29640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.531 [2024-07-22 18:40:53.317247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.317293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.531 [2024-07-22 18:40:53.317315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.317345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.531 [2024-07-22 18:40:53.317367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.317397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.531 [2024-07-22 18:40:53.317426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.317456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:28776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.531 [2024-07-22 18:40:53.317478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.317508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:28784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.531 [2024-07-22 18:40:53.317529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.317579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.531 [2024-07-22 18:40:53.317602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.317648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:28800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.531 [2024-07-22 18:40:53.317669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.317708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:28808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.531 [2024-07-22 18:40:53.317731] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.317760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.531 [2024-07-22 18:40:53.317781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.317811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:28824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.531 [2024-07-22 18:40:53.317831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.317861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:28832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.531 [2024-07-22 18:40:53.317897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.317931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.531 [2024-07-22 18:40:53.317953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.317982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.531 [2024-07-22 18:40:53.318016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.318066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:28856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.531 [2024-07-22 18:40:53.318088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.318117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:28864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.531 [2024-07-22 18:40:53.318138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.318168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.531 [2024-07-22 18:40:53.318190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.318219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:28880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.531 [2024-07-22 18:40:53.318241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:16.531 [2024-07-22 18:40:53.318271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:28888 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:16.531 [2024-07-22 18:40:53.318293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.318323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:28896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.318344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.318375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:28904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.318412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.318446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:28912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.318469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.318500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:28920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.318522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.318553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:28928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.318574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.318618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:28936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.318640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.318669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.318690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.318720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:28952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.318741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.318770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:28960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.318791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.318821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 
nsid:1 lba:28968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.318842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.318885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:28976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.318910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.318942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:28984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.318963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.318993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:28992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.319013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.319043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:29000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.319072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.319104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:29008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.319125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.319156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.319178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.320017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.320055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.320094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.320120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.320151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.320174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.320204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:29048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.320225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.320255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:29056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.320277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.320307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.320328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.320358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:29072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.320379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.320410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:29080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.320431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.320460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:29088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.320482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.320511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.320533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.320578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.320616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.320645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:29112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.320667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.320696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:29120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.320716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 
00:34:16.532 [2024-07-22 18:40:53.320745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:29128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.320766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.320794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.320815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.320843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.320885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:16.532 [2024-07-22 18:40:53.320917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:29152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.532 [2024-07-22 18:40:53.320938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.320967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:29160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.320988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.321017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:29168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.321037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.321066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.321087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.321116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:29184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.321136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.321165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:29192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.321186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.321225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.321264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.321294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:29208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.321315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.321346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:29216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.321367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.321397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.321418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.321448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.321470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.321500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:29240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.321521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.321551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:29248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.321573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.321618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:29256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.321638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.321667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:29264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.321695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.321725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.321746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.321781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:29280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.321803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.321832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:29288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.321853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.321898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:29296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.321932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.321981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:29304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.322031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.322067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:29312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.322090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.322120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.322142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.322172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.322193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.322223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:29336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.322245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.322274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:29344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.322295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.322326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:29352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.322348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.322377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:29360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:16.533 [2024-07-22 18:40:53.322399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.322428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:29368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.322450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.322480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.322504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.322534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:29384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.322555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.322599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.322639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.322671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.322693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.322723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:29408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.322744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.322773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.322794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.322823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.322844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.322888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:29432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.322913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.322943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 
lba:29440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.322964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.322993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.323013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.323041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:29456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.323062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:16.533 [2024-07-22 18:40:53.323090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.533 [2024-07-22 18:40:53.323111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.323140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:29472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.323161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.323190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:29480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.323227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.323256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:29488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.323277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.323316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:29496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.323339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.323369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:29504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.323390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.323420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:29512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.323441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.323471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:29520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.323492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.323521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:29528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.323542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.323571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:29536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.323593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.323623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.323644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.323687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.323708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.323738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.323759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.323787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.323808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.323837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:28736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.534 [2024-07-22 18:40:53.323873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.323921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:28744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.534 [2024-07-22 18:40:53.323945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.325006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.534 [2024-07-22 18:40:53.325046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 
00:34:16.534 [2024-07-22 18:40:53.325086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.325118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.325149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:29672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.325171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.325201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.325222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.325252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.325274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.325304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.325325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.325355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.325376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.325407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.325431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.325460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.325482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.325513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.325534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.325564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.325585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.325616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.325637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.325682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.325731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.325763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:28760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.534 [2024-07-22 18:40:53.325785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.325815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:29576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.325837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.325867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:29584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.325906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.325949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.325971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.326019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:29600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.326046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.326077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.326098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.326129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.326150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.326180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:29624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.326201] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.326230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.326252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.326281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.326303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:16.534 [2024-07-22 18:40:53.326333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.534 [2024-07-22 18:40:53.326355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.326384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:29656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.535 [2024-07-22 18:40:53.326414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.326448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.535 [2024-07-22 18:40:53.326470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.326500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:28776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.535 [2024-07-22 18:40:53.326521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.326552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:28784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.535 [2024-07-22 18:40:53.326573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.326637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.535 [2024-07-22 18:40:53.326658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.326687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:28800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.535 [2024-07-22 18:40:53.326707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.326737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:28808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:16.535 [2024-07-22 18:40:53.326758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.326787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.535 [2024-07-22 18:40:53.326808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.326837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.535 [2024-07-22 18:40:53.326890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.326924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:28832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.535 [2024-07-22 18:40:53.326947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.326977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.535 [2024-07-22 18:40:53.326998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.327028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:28848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.535 [2024-07-22 18:40:53.327049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.327090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:28856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.535 [2024-07-22 18:40:53.327111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.327152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:28864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.535 [2024-07-22 18:40:53.327176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.327207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:28872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.535 [2024-07-22 18:40:53.327228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.327258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:28880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.535 [2024-07-22 18:40:53.327280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.327309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 
nsid:1 lba:28888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.535 [2024-07-22 18:40:53.327331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.327362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:28896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.535 [2024-07-22 18:40:53.327383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.327412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:28904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.535 [2024-07-22 18:40:53.327433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.327464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:28912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.535 [2024-07-22 18:40:53.327486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.327516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:28920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.535 [2024-07-22 18:40:53.327537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.327566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:28928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.535 [2024-07-22 18:40:53.327588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.327633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.535 [2024-07-22 18:40:53.327654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.327683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:28944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.535 [2024-07-22 18:40:53.327703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.327732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:28952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.535 [2024-07-22 18:40:53.327752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.327789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.535 [2024-07-22 18:40:53.327811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.327841] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:28968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.535 [2024-07-22 18:40:53.327894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.327927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:28976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.535 [2024-07-22 18:40:53.327949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.327979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:28984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.535 [2024-07-22 18:40:53.328002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.328033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:28992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.535 [2024-07-22 18:40:53.328054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.328084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:29000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.535 [2024-07-22 18:40:53.328105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.328137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:29008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.535 [2024-07-22 18:40:53.328159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.329013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:29016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.535 [2024-07-22 18:40:53.329051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.329100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.535 [2024-07-22 18:40:53.329124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.329156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.535 [2024-07-22 18:40:53.329179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.329209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.535 [2024-07-22 18:40:53.329244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 
00:34:16.535 [2024-07-22 18:40:53.329291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.535 [2024-07-22 18:40:53.329313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.329343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.535 [2024-07-22 18:40:53.329377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:16.535 [2024-07-22 18:40:53.329409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:29064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.329431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.329462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:29072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.329483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.329513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:29080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.329533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.329563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:29088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.329585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.329629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:29096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.329650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.329679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:29104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.329698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.329727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:29112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.329749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.329778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:29120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.329798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:63 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.329827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:29128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.329863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.329910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.329935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.329967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:29144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.329989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.330036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.330069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.330101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:29160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.330123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.330153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:29168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.330174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.330203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.330224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.330254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.330275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.330304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:29192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.330325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.330355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.330376] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.330405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.330426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.330455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:29216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.330478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.330508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.330529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.330558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:29232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.330579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.330639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:29240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.330661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.330690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:29248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.330711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.330749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:29256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.330772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.330803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:29264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.330824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.330853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.330891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.330924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:16.536 [2024-07-22 18:40:53.330946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.330977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:29288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.331168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.331218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:29296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.331243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.331294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:29304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.331317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.331347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:29312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.331369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.331399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:29320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.331420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.331450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.331471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.331501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:29336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.331522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.331551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:29344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.331572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.331615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:29352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.331638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.331669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 
lba:29360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.331690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:16.536 [2024-07-22 18:40:53.331720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.536 [2024-07-22 18:40:53.331742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.331774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.331795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.331824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:29384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.331864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.331898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:29392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.331921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.331950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.331971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.332001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:29408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.332022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.332053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:29416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.332074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.332103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.332124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.332155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:29432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.332176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.332207] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:29440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.332228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.332258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:29448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.332288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.332321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.332499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.332632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.332734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.332772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:29472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.332797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.332829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:29480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.332868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.332902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:29488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.332924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.332955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.332977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.333007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.333029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.333059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:29512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.333080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002a p:0 m:0 dnr:0 
00:34:16.537 [2024-07-22 18:40:53.333110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:29520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.333132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.333163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:29528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.333184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.333214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.333236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.333266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:29544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.333298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.333331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:29552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.333353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.333383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.333404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.333434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.333455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.333486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.537 [2024-07-22 18:40:53.333508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.334607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:28744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.537 [2024-07-22 18:40:53.334644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.334683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:28752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.537 [2024-07-22 18:40:53.334707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.334738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.334760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.334790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.334811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.334841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.334900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.334934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.334958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.334989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.537 [2024-07-22 18:40:53.335011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:16.537 [2024-07-22 18:40:53.335041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.538 [2024-07-22 18:40:53.335062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.335110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.538 [2024-07-22 18:40:53.335133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.335164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.538 [2024-07-22 18:40:53.335185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.335215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.538 [2024-07-22 18:40:53.335252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.335281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.538 [2024-07-22 18:40:53.335302] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.335331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.538 [2024-07-22 18:40:53.335352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.335381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.538 [2024-07-22 18:40:53.335402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.335431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:28760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.538 [2024-07-22 18:40:53.335452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.335481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.538 [2024-07-22 18:40:53.335502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.335532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:29584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.538 [2024-07-22 18:40:53.335553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.335582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.538 [2024-07-22 18:40:53.335602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.335631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:29600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.538 [2024-07-22 18:40:53.335652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.335681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.538 [2024-07-22 18:40:53.335702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.335740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.538 [2024-07-22 18:40:53.335763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.335793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:29624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:16.538 [2024-07-22 18:40:53.335814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.335858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.538 [2024-07-22 18:40:53.335896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.335928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.538 [2024-07-22 18:40:53.335951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.335982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.538 [2024-07-22 18:40:53.336003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.336038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:29656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.538 [2024-07-22 18:40:53.336059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.336091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:28768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.538 [2024-07-22 18:40:53.336112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.336143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.538 [2024-07-22 18:40:53.336164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.336194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:28784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.538 [2024-07-22 18:40:53.336216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.336280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:28792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.538 [2024-07-22 18:40:53.336301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.336331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.538 [2024-07-22 18:40:53.336352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.336382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 
lba:28808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.538 [2024-07-22 18:40:53.336402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.336432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:28816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.538 [2024-07-22 18:40:53.336462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.336493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.538 [2024-07-22 18:40:53.336514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.336544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.538 [2024-07-22 18:40:53.336565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.336594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:28840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.538 [2024-07-22 18:40:53.336615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.336645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:28848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.538 [2024-07-22 18:40:53.336666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.336695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.538 [2024-07-22 18:40:53.336716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.336745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:28864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.538 [2024-07-22 18:40:53.336765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.336803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:28872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.538 [2024-07-22 18:40:53.336824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.336885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:28880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.538 [2024-07-22 18:40:53.336909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.336942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:28888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.538 [2024-07-22 18:40:53.336964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.336995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:28896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.538 [2024-07-22 18:40:53.337017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.337047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:28904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.538 [2024-07-22 18:40:53.337068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.337106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:28912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.538 [2024-07-22 18:40:53.337137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:16.538 [2024-07-22 18:40:53.337171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:28920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.337193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.337224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.337245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.337290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:28936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.337310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.337340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:28944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.337361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.337391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.337427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.337457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:28960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.337478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 
00:34:16.539 [2024-07-22 18:40:53.337511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:28968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.337532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.337562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:28976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.337583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.337612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:28984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.337633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.337664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.337685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.337716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.337738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.338534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.338572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.338656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.338685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.338717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.338738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.338767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:29032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.338788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.338818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:29040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.338839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.338906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.338930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.338961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:29056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.338982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.339013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:29064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.339034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.339064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:29072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.339085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.339115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.339136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.339166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.339187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.339217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:29096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.339253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.339282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:29104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.339302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.339358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:29112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.339380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.339411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.339433] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.339463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.339485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.339514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:29136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.339535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.339564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:29144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.339585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.339615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:29152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.339636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.339665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.339701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.339730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:29168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.339750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.339779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:29176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.339799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.339828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.339863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.339907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:29192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.339932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.339970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:29200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:16.539 [2024-07-22 18:40:53.339992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.340022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.340053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.340085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.340107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.340137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:29224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.539 [2024-07-22 18:40:53.340158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:16.539 [2024-07-22 18:40:53.340188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:29232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.340209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.340239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:29240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.340260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.340290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:29248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.340311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.340342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.340364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.340394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:29264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.340416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.340447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:29272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.340468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.340498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 
lba:29280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.340519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.340560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:29288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.340581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.340611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:29296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.340631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.340678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.340707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.340739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.340760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.340790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:29320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.340810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.340840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:29328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.340894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.340926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:29336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.340949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.340978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:29344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.340999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.341029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:29352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.341050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.341080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.341101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.341132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:29368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.341153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.341184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.341219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.341248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.341268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.341299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:29392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.341320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.341350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.341370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.341410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.341432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.341462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:29416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.341483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.341513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:29424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.341533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.341562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.341583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 
00:34:16.540 [2024-07-22 18:40:53.341612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:29440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.341632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.341661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.341681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.341712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:29456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.341732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.341761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:29464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.341781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.341811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:29472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.341831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.341894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:29480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.341919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.341950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:29488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.341971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.342013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:29496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.342044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.342085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:29504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.342108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.342138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:29512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.342159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.342189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:29520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.342210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.342243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.342264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.342294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.540 [2024-07-22 18:40:53.342315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:16.540 [2024-07-22 18:40:53.342346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.541 [2024-07-22 18:40:53.342367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.342397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.541 [2024-07-22 18:40:53.342418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.342448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.541 [2024-07-22 18:40:53.342469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.342500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:29568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.541 [2024-07-22 18:40:53.342522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.343574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.541 [2024-07-22 18:40:53.343609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.343648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.541 [2024-07-22 18:40:53.343671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.343700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:28752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.541 [2024-07-22 18:40:53.343721] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.343750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.541 [2024-07-22 18:40:53.343784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.343818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:29672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.541 [2024-07-22 18:40:53.343839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.343908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.541 [2024-07-22 18:40:53.343931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.343962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.541 [2024-07-22 18:40:53.343984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.344013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.541 [2024-07-22 18:40:53.344034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.344064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.541 [2024-07-22 18:40:53.344085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.344115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.541 [2024-07-22 18:40:53.344136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.344166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.541 [2024-07-22 18:40:53.344186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.344216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.541 [2024-07-22 18:40:53.344237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.344267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:16.541 [2024-07-22 18:40:53.344289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.344334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.541 [2024-07-22 18:40:53.344354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.344384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.541 [2024-07-22 18:40:53.344404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.344434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:28760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.541 [2024-07-22 18:40:53.344463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.344495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.541 [2024-07-22 18:40:53.344517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.344547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:29584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.541 [2024-07-22 18:40:53.344567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.344596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.541 [2024-07-22 18:40:53.344616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.344645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:29600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.541 [2024-07-22 18:40:53.344665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.344695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:29608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.541 [2024-07-22 18:40:53.344715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.344744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.541 [2024-07-22 18:40:53.344764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.344794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 
lba:29624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.541 [2024-07-22 18:40:53.344814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.344859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.541 [2024-07-22 18:40:53.344896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.344928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:29640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.541 [2024-07-22 18:40:53.344951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.344987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.541 [2024-07-22 18:40:53.345008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.345038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:29656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.541 [2024-07-22 18:40:53.345060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.345089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:28768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.541 [2024-07-22 18:40:53.345110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.345150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.541 [2024-07-22 18:40:53.345172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:16.541 [2024-07-22 18:40:53.345203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:28784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.542 [2024-07-22 18:40:53.345239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.345287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:28792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.542 [2024-07-22 18:40:53.345309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.345338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.542 [2024-07-22 18:40:53.345358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.345387] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.542 [2024-07-22 18:40:53.345409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.345455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:28816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.542 [2024-07-22 18:40:53.345476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.345506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.542 [2024-07-22 18:40:53.345527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.345557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:28832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.542 [2024-07-22 18:40:53.345579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.345609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:28840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.542 [2024-07-22 18:40:53.345630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.345660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:28848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.542 [2024-07-22 18:40:53.345681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.345712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:28856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.542 [2024-07-22 18:40:53.345733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.345764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:28864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.542 [2024-07-22 18:40:53.345785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.345826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.542 [2024-07-22 18:40:53.345849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.345896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:28880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.542 [2024-07-22 18:40:53.345920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 
dnr:0 00:34:16.542 [2024-07-22 18:40:53.345951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:28888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.542 [2024-07-22 18:40:53.345972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.346016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:28896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.542 [2024-07-22 18:40:53.346041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.346074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:28904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.542 [2024-07-22 18:40:53.346095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.346127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:28912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.542 [2024-07-22 18:40:53.346148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.346179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.542 [2024-07-22 18:40:53.346200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.346230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:28928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.542 [2024-07-22 18:40:53.346251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.346283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:28936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.542 [2024-07-22 18:40:53.346304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.346333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.542 [2024-07-22 18:40:53.346354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.346384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:28952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.542 [2024-07-22 18:40:53.346405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.346435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:28960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.542 [2024-07-22 18:40:53.346456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:118 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.346486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:28968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.542 [2024-07-22 18:40:53.346517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.346550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:28976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.542 [2024-07-22 18:40:53.346572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.346616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:28984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.542 [2024-07-22 18:40:53.346637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.346679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:28992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.542 [2024-07-22 18:40:53.346700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.347528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:29000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.542 [2024-07-22 18:40:53.347563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.347601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.542 [2024-07-22 18:40:53.347624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.347655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.542 [2024-07-22 18:40:53.347675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.347705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.542 [2024-07-22 18:40:53.347726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.347758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.542 [2024-07-22 18:40:53.347779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.347808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.542 [2024-07-22 18:40:53.347829] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.347895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:29048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.542 [2024-07-22 18:40:53.347921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.347971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:29056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.542 [2024-07-22 18:40:53.348001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.348032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:29064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.542 [2024-07-22 18:40:53.348065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.348099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:29072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.542 [2024-07-22 18:40:53.348121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.348152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:29080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.542 [2024-07-22 18:40:53.348173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:16.542 [2024-07-22 18:40:53.348203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:29088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.542 [2024-07-22 18:40:53.348224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.348269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:29096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.348289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.348326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:29104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.348347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.348376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:29112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.348397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.348427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:16.543 [2024-07-22 18:40:53.348447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.348476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:29128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.348497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.348527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.348548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.348577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:29144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.348615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.348645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:29152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.348667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.348697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.348719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.348759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.348781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.348811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:29176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.348832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.348877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.348902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.348939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.348962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.348992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 
lba:29200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.349014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.349044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.349065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.349095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:29216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.349116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.349146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:29224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.349168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.349198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:29232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.349219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.349249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:29240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.349271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.349302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:29248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.349324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.349353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.349391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.349429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.349451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.349480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:29272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.349500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.349528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:29280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.349549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.349581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:29288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.349601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.349630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:29296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.349651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.349727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:29304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.349749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.349778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.349799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.349828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:29320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.349884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.349920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:29328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.349951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.349982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:29336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.350014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.350054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:29344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.350085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.350116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.350137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
00:34:16.543 [2024-07-22 18:40:53.350167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.350200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.350231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:29368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.350253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.350283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.350304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.350335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.350356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.350387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:29392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.350408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:16.543 [2024-07-22 18:40:53.350438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:29400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.543 [2024-07-22 18:40:53.350459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.350489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.350510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.350540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:29416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.350561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.350606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:29424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.350627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.350656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:29432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.350677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:53 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.350706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.350727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.350756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.350777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.350806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:29456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.350834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.350898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:29464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.350922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.350952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:29472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.350974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.351005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.351026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.351057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.351078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.351108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:29496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.351130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.351160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:29504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.351182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.351213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:29512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.351249] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.351278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.351300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.351330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:29528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.351350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.351379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:29536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.351400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.351429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.351450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.351479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.351501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.351543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.351566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.352611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:29568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.352648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.352687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:28736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.544 [2024-07-22 18:40:53.352727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.352759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:28744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.544 [2024-07-22 18:40:53.352781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.352812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:16.544 [2024-07-22 18:40:53.352834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.352879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.352904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.352936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:29672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.352959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.352990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.353011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.353042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.353063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.353094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.353115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.353145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.353166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.353196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.353218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.353274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.353297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.353326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.353347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.353393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 
lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.353415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.353445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.353466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.353496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.353517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.353548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:28760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.544 [2024-07-22 18:40:53.353569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.353599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.353621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:16.544 [2024-07-22 18:40:53.353652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:29584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.544 [2024-07-22 18:40:53.353673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.353703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.545 [2024-07-22 18:40:53.353725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.353755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:29600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.545 [2024-07-22 18:40:53.353776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.353807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:29608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.545 [2024-07-22 18:40:53.353843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.353901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.545 [2024-07-22 18:40:53.353927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.353959] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.545 [2024-07-22 18:40:53.353991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.354040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.545 [2024-07-22 18:40:53.354063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.354094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:29640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.545 [2024-07-22 18:40:53.354115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.354146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.545 [2024-07-22 18:40:53.354167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.354198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.545 [2024-07-22 18:40:53.354220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.354250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:28768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.545 [2024-07-22 18:40:53.354271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.354302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:28776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.545 [2024-07-22 18:40:53.354324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.354362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.545 [2024-07-22 18:40:53.354383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.354432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:28792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.545 [2024-07-22 18:40:53.354455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.354486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:28800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.545 [2024-07-22 18:40:53.354508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
00:34:16.545 [2024-07-22 18:40:53.354538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.545 [2024-07-22 18:40:53.354559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.354604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.545 [2024-07-22 18:40:53.354633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.354681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:28824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.545 [2024-07-22 18:40:53.354711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.354743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:28832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.545 [2024-07-22 18:40:53.354765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.354796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.545 [2024-07-22 18:40:53.354817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.354847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:28848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.545 [2024-07-22 18:40:53.354883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.354918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:28856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.545 [2024-07-22 18:40:53.354940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.354971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:28864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.545 [2024-07-22 18:40:53.354992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.355022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:28872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.545 [2024-07-22 18:40:53.355043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.355073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:28880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.545 [2024-07-22 18:40:53.355095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.355125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:28888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.545 [2024-07-22 18:40:53.355146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.355177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:28896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.545 [2024-07-22 18:40:53.355199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.355244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:28904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.545 [2024-07-22 18:40:53.355281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.355312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.545 [2024-07-22 18:40:53.355333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.355363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:28920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.545 [2024-07-22 18:40:53.355384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.355425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:28928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.545 [2024-07-22 18:40:53.355448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.355479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.545 [2024-07-22 18:40:53.355500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.355544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:28944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.545 [2024-07-22 18:40:53.355565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.355596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:28952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.545 [2024-07-22 18:40:53.355617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.355647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:28960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.545 [2024-07-22 18:40:53.355669] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.355714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:28968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.545 [2024-07-22 18:40:53.355735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.355764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.545 [2024-07-22 18:40:53.355785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.355815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:28984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.545 [2024-07-22 18:40:53.355836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:16.545 [2024-07-22 18:40:53.356639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:28992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.545 [2024-07-22 18:40:53.356685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.356761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.356785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.356816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.356838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.356899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:29016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.356924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.356969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:29024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.356992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.357023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.357044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.357074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:29040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:16.546 [2024-07-22 18:40:53.357095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.357126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:29048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.357148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.357179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:29056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.357200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.357231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.357253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.357283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.357305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.357335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:29080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.357356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.357388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:29088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.357409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.357440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:29096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.357462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.357496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.357518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.357549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.357570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.357601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 
lba:29120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.357632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.357664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:29128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.357686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.357718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:29136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.357739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.357770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.357791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.357822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:29152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.357861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.357895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:29160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.357917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.357949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.357970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.358023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:29176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.358048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.358080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:29184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.358102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.358133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.358154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.358186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.358207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.358238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:29208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.358259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.358290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:29216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.358320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.358353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:29224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.358375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.358406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:29232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.358427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.358459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.358480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.358510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:29248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.358532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.358581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:29256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.358602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:16.546 [2024-07-22 18:40:53.358632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:29264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.546 [2024-07-22 18:40:53.358653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.358688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:29272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.358708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
00:34:16.547 [2024-07-22 18:40:53.358739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:29280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.358760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.358789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.358809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.358840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.358893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.358948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:29304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.358970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.359002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:29312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.359024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.359072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:29320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.359095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.359127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:29328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.359149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.359180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:29336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.359201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.359248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.359286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.359317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:29352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.359338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:111 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.359369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.359391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.359421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.359442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.359473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.359494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.359525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.359546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.359577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.359598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.359632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:29400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.359653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.359684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:29408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.359720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.359759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.359782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.359812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:29424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.359833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.359881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.359919] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.359954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:29440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.359977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.360008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:29448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.360030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.360061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:29456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.360082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.360114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:29464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.360135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.360166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:29472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.360187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.360233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:29480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.360271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.360302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:29488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.360323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.360355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:29496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.360376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.360408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:29504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.360429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.360460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:16.547 [2024-07-22 18:40:53.360491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.360524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.360546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.360577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.360598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.360644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.360665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.360695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.360716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.360747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:29552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.360768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.361158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.361204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.361280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:29568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.547 [2024-07-22 18:40:53.361318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:16.547 [2024-07-22 18:40:53.361361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:28736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.548 [2024-07-22 18:40:53.361384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.361421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:28744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.548 [2024-07-22 18:40:53.361448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.361487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 
lba:28752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.548 [2024-07-22 18:40:53.361514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.361553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.548 [2024-07-22 18:40:53.361580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.361634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.548 [2024-07-22 18:40:53.361670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.361715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.548 [2024-07-22 18:40:53.361738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.361774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.548 [2024-07-22 18:40:53.361800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.361869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.548 [2024-07-22 18:40:53.361895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.361933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:29704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.548 [2024-07-22 18:40:53.361955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.361992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.548 [2024-07-22 18:40:53.362028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.362072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.548 [2024-07-22 18:40:53.362099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.362138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.548 [2024-07-22 18:40:53.362161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.362197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.548 [2024-07-22 18:40:53.362223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.362262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.548 [2024-07-22 18:40:53.362289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.362328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.548 [2024-07-22 18:40:53.362355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.362394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:28760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.548 [2024-07-22 18:40:53.362416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.362453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.548 [2024-07-22 18:40:53.362478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.362533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:29584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.548 [2024-07-22 18:40:53.362578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.362615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:29592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.548 [2024-07-22 18:40:53.362642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.362679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:29600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.548 [2024-07-22 18:40:53.362701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.362736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.548 [2024-07-22 18:40:53.362762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.362800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.548 [2024-07-22 18:40:53.362822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 
00:34:16.548 [2024-07-22 18:40:53.362896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:29624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.548 [2024-07-22 18:40:53.362921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.362959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.548 [2024-07-22 18:40:53.362985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.363023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:29640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.548 [2024-07-22 18:40:53.363045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.363082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.548 [2024-07-22 18:40:53.363108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.363147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.548 [2024-07-22 18:40:53.363173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.363212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:28768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.548 [2024-07-22 18:40:53.363270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.363309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:28776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.548 [2024-07-22 18:40:53.363336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.363384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.548 [2024-07-22 18:40:53.363412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.363475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.548 [2024-07-22 18:40:53.363504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.363551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:28800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.548 [2024-07-22 18:40:53.363577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:2 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.363616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.548 [2024-07-22 18:40:53.363638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.363689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:28816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.548 [2024-07-22 18:40:53.363714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.363751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:28824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.548 [2024-07-22 18:40:53.363773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.363808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:28832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.548 [2024-07-22 18:40:53.363830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.363901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:28840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.548 [2024-07-22 18:40:53.363924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.363962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:28848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.548 [2024-07-22 18:40:53.363984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.364021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.548 [2024-07-22 18:40:53.364043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:16.548 [2024-07-22 18:40:53.364079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:28864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.549 [2024-07-22 18:40:53.364101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:40:53.364138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:28872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.549 [2024-07-22 18:40:53.364159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:40:53.364197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.549 [2024-07-22 18:40:53.364243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:40:53.364299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:28888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.549 [2024-07-22 18:40:53.364326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:40:53.364365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:28896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.549 [2024-07-22 18:40:53.364388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:40:53.364425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.549 [2024-07-22 18:40:53.364451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:40:53.364491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:28912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.549 [2024-07-22 18:40:53.364514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:40:53.364551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:28920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.549 [2024-07-22 18:40:53.364573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:40:53.364611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.549 [2024-07-22 18:40:53.364632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:40:53.364683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:28936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.549 [2024-07-22 18:40:53.364704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:40:53.364739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:28944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.549 [2024-07-22 18:40:53.364760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:40:53.364796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:28952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.549 [2024-07-22 18:40:53.364822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:40:53.364891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:28960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:16.549 [2024-07-22 18:40:53.364915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:40:53.364952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:28968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.549 [2024-07-22 18:40:53.364978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:40:53.365026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:28976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.549 [2024-07-22 18:40:53.365063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:40:53.365279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:28984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.549 [2024-07-22 18:40:53.365323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:41:06.787688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.549 [2024-07-22 18:41:06.787822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:41:06.787966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.549 [2024-07-22 18:41:06.788000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:41:06.788055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.549 [2024-07-22 18:41:06.788080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:41:06.788116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.549 [2024-07-22 18:41:06.788139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:41:06.788174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.549 [2024-07-22 18:41:06.788196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:41:06.788231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.549 [2024-07-22 18:41:06.788254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:41:06.788289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 
lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.549 [2024-07-22 18:41:06.788311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:41:06.788932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.549 [2024-07-22 18:41:06.788970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:41:06.790206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:16.549 [2024-07-22 18:41:06.790257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:41:06.790285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:16.549 [2024-07-22 18:41:06.790306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:41:06.790327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:16.549 [2024-07-22 18:41:06.790346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:41:06.790395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:16.549 [2024-07-22 18:41:06.790417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:41:06.790440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.549 [2024-07-22 18:41:06.790461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:41:06.790525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:34:16.549 [2024-07-22 18:41:06.790854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.549 [2024-07-22 18:41:06.790904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:41:06.790943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.549 [2024-07-22 18:41:06.790965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:41:06.790988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.549 [2024-07-22 18:41:06.791009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:41:06.791032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.549 [2024-07-22 18:41:06.791052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:41:06.791074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.549 [2024-07-22 18:41:06.791094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:41:06.791117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.549 [2024-07-22 18:41:06.791137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:41:06.791160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.549 [2024-07-22 18:41:06.791179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:41:06.791201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.549 [2024-07-22 18:41:06.791220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:41:06.791244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.549 [2024-07-22 18:41:06.791264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.549 [2024-07-22 18:41:06.791288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.549 [2024-07-22 18:41:06.791307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.791350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.791374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.791398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.791419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.791443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.791463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:34:16.550 [2024-07-22 18:41:06.791486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.791506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.791529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.791550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.791572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.791604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.791627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.791647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.791670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.791690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.791715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.791735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.791758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.791778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.791802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.791822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.791862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.791884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.791907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.791937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 
18:41:06.791961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.791982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.792005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.792025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.792048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.792096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.792121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.792142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.792165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.792186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.792209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.792229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.792252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.550 [2024-07-22 18:41:06.792272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.792296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.550 [2024-07-22 18:41:06.792317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.792339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.550 [2024-07-22 18:41:06.792360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.792384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.550 [2024-07-22 18:41:06.792404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.792428] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.550 [2024-07-22 18:41:06.792448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.792471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.550 [2024-07-22 18:41:06.792492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.792528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.792550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.792574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.792595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.792620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.792640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.792664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.792684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.792709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.792729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.792753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.792774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.792797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.792817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.792872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.550 [2024-07-22 18:41:06.792896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.550 [2024-07-22 18:41:06.792919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:104 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:16.550 [2024-07-22 18:41:06.792939 through 18:41:06.796447] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: the same command/completion pair repeats for every remaining outstanding I/O on qid:1 (READ commands covering lba 83976 and 84048-84536, WRITE commands covering lba 84544-84696, all len:8, various cid values); each command completes with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:16.552 [2024-07-22 18:41:06.796744 through 18:41:06.797220] bdev_nvme.c:7811:bdev_nvme_readv: *ERROR*: readv failed: rc = -6 (reported nine times, once per failed readv)
00:34:16.552 [2024-07-22 18:41:06.797591] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b500 was disconnected and freed. reset controller.
00:34:16.552 [2024-07-22 18:41:06.799501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.552 [2024-07-22 18:41:06.799621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:34:16.552 [2024-07-22 18:41:06.799888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.553 [2024-07-22 18:41:06.799965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.2, port=4421 00:34:16.553 [2024-07-22 18:41:06.799999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:34:16.553 [2024-07-22 18:41:06.800051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:34:16.553 [2024-07-22 18:41:06.800139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.553 [2024-07-22 18:41:06.800177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.553 [2024-07-22 18:41:06.800202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.553 [2024-07-22 18:41:06.800260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.553 [2024-07-22 18:41:06.800284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.553 [2024-07-22 18:41:16.906030] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
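The block above is the interesting part of the multipath run: the first reconnect attempt to 10.0.0.2 port 4421 is refused (connect() errno 111) and the controller is left in a failed state, and a later attempt roughly ten seconds on succeeds. To watch a reset like this from outside the test, the bdevperf RPC socket can simply be polled; a minimal sketch, assuming the usual /var/tmp/bdevperf.sock socket and a one-second poll interval (neither is stated at this point in the log):
  # Sketch only: observe controller recovery by polling bdevperf's RPC socket (assumed path).
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock            # assumption: bdevperf's default socket in these tests
  for _ in $(seq 1 30); do
      "$RPC" -s "$SOCK" bdev_nvme_get_controllers   # lists the controllers bdevperf has attached
      sleep 1
  done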
00:34:16.553 Received shutdown signal, test time was about 55.734535 seconds 00:34:16.553 00:34:16.553 Latency(us) 00:34:16.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:16.553 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:16.553 Verification LBA range: start 0x0 length 0x4000 00:34:16.553 Nvme0n1 : 55.73 5292.96 20.68 0.00 0.00 24148.79 1995.87 7076934.75 00:34:16.553 =================================================================================================================== 00:34:16.553 Total : 5292.96 20.68 0.00 0.00 24148.79 1995.87 7076934.75 00:34:16.553 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:16.811 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:34:16.811 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:34:16.811 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:34:16.811 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:16.812 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:34:16.812 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:16.812 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:34:16.812 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:16.812 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:16.812 rmmod nvme_tcp 00:34:16.812 rmmod nvme_fabrics 00:34:16.812 rmmod nvme_keyring 00:34:16.812 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:16.812 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:34:16.812 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:34:16.812 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 106196 ']' 00:34:16.812 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 106196 00:34:16.812 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 106196 ']' 00:34:16.812 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 106196 00:34:16.812 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:34:16.812 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:16.812 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 106196 00:34:17.070 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:17.070 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:17.070 killing process with pid 106196 00:34:17.070 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 106196' 00:34:17.070 18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 106196 00:34:17.070 
18:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 106196 00:34:18.445 18:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:18.445 18:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:18.445 18:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:18.445 18:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:18.445 18:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:18.445 18:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.445 18:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:18.445 18:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.445 18:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:34:18.445 00:34:18.445 real 1m4.548s 00:34:18.445 user 3m2.436s 00:34:18.445 sys 0m12.989s 00:34:18.445 18:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:18.445 ************************************ 00:34:18.445 END TEST nvmf_host_multipath 00:34:18.445 18:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:34:18.445 ************************************ 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.721 ************************************ 00:34:18.721 START TEST nvmf_timeout 00:34:18.721 ************************************ 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:34:18.721 * Looking for test storage... 
00:34:18.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.721 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:34:18.722 Cannot find device "nvmf_tgt_br" 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:34:18.722 Cannot find device "nvmf_tgt_br2" 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # true 
00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:34:18.722 Cannot find device "nvmf_tgt_br" 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:34:18.722 Cannot find device "nvmf_tgt_br2" 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:34:18.722 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:18.980 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:18.980 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:34:18.980 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:18.980 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:18.980 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:34:18.980 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:34:18.980 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:18.980 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:18.980 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:18.981 18:41:30 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:34:18.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:18.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:34:18.981 00:34:18.981 --- 10.0.0.2 ping statistics --- 00:34:18.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.981 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:34:18.981 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:18.981 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:34:18.981 00:34:18.981 --- 10.0.0.3 ping statistics --- 00:34:18.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.981 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:18.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:18.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:34:18.981 00:34:18.981 --- 10.0.0.1 ping statistics --- 00:34:18.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.981 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=107553 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 107553 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 107553 ']' 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:18.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:18.981 18:41:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:34:19.239 [2024-07-22 18:41:31.090335] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:34:19.239 [2024-07-22 18:41:31.090520] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:19.497 [2024-07-22 18:41:31.273962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:19.755 [2024-07-22 18:41:31.578580] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
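Condensed, the nvmf_veth_init steps traced above build the following topology: the initiator stays in the root namespace on nvmf_init_if (10.0.0.1/24), the target interfaces nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, the three veth peers are enslaved to the nvmf_br bridge, and iptables admits TCP port 4420. A stripped-down sketch of the same bring-up (run as root; teardown of stale devices and error handling omitted):
  # Minimal recreation of the veth/bridge topology set up by nvmf_veth_init (sketch only).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # reachability check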
00:34:19.755 [2024-07-22 18:41:31.578683] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:19.755 [2024-07-22 18:41:31.578702] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:19.755 [2024-07-22 18:41:31.578718] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:19.755 [2024-07-22 18:41:31.578730] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:19.755 [2024-07-22 18:41:31.578970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:19.755 [2024-07-22 18:41:31.578986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:20.013 18:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:20.013 18:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:34:20.013 18:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:20.013 18:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:20.013 18:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:34:20.013 18:41:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:20.013 18:41:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:20.013 18:41:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:20.579 [2024-07-22 18:41:32.293028] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:20.579 18:41:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:20.837 Malloc0 00:34:20.837 18:41:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:21.095 18:41:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:21.353 18:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:21.611 [2024-07-22 18:41:33.500289] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:21.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
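Stripped of the xtrace noise, the target side of the timeout test is one nvmf_tgt process inside the namespace plus five RPC calls. A condensed restatement of the commands traced above (run as root, paths as in this workspace; the sleep is a crude stand-in for the harness's waitforlisten):
  # Target-side bring-up for host/timeout.sh, condensed from the trace above.
  SPDK=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
  sleep 2   # assumption: stand-in for waitforlisten on the default /var/tmp/spdk.sock
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
  "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0    # 64 MB malloc bdev, 512-byte blocks
  "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420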
00:34:21.611 18:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=107644 00:34:21.611 18:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:34:21.611 18:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 107644 /var/tmp/bdevperf.sock 00:34:21.611 18:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 107644 ']' 00:34:21.611 18:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:21.611 18:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:21.611 18:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:21.611 18:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:21.611 18:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:34:21.869 [2024-07-22 18:41:33.672915] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:34:21.869 [2024-07-22 18:41:33.673127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107644 ] 00:34:21.869 [2024-07-22 18:41:33.838679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:22.127 [2024-07-22 18:41:34.118292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:22.693 18:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:22.693 18:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:34:22.693 18:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:22.951 18:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:34:23.517 NVMe0n1 00:34:23.517 18:41:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:23.517 18:41:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=107692 00:34:23.517 18:41:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:34:23.517 Running I/O for 10 seconds... 
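On the initiator side the trace shows bdevperf being pointed at that subsystem with a deliberately short controller-loss timeout, which is what the rest of the timeout test exercises. Condensed into one runnable sequence (binaries and arguments taken verbatim from the trace; the backgrounding and the wait-for-socket loop are assumptions standing in for the harness's waitforlisten helper):
  # Initiator/bdevperf side of the timeout test, condensed from the trace above.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  bdevperf_pid=$!
  while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done   # assumption: stand-in for waitforlisten
  "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1   # same option as the trace
  "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2   # retry every 2 s, give up on the controller after 5 s
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &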
00:34:24.452 18:41:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:24.715 [2024-07-22 18:41:36.569211 through 18:41:36.569769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set (the same message repeats throughout this interval, only the timestamp changing)
00:34:24.715 [2024-07-22 18:41:36.569781] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is
same with the state(5) to be set 00:34:24.715 [2024-07-22 18:41:36.569792] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:34:24.715 [2024-07-22 18:41:36.569805] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:34:24.715 [2024-07-22 18:41:36.569817] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:34:24.715 [2024-07-22 18:41:36.569829] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:34:24.715 [2024-07-22 18:41:36.569856] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:34:24.715 [2024-07-22 18:41:36.569868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:34:24.715 [2024-07-22 18:41:36.569881] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:34:24.715 [2024-07-22 18:41:36.569903] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:34:24.715 [2024-07-22 18:41:36.571086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.715 [2024-07-22 18:41:36.571147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.715 [2024-07-22 18:41:36.571206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.715 [2024-07-22 18:41:36.571227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.715 [2024-07-22 18:41:36.571247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.715 [2024-07-22 18:41:36.571263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.715 [2024-07-22 18:41:36.571279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.715 [2024-07-22 18:41:36.571293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.715 [2024-07-22 18:41:36.571533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:60936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.715 [2024-07-22 18:41:36.571567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.715 [2024-07-22 18:41:36.571589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.715 [2024-07-22 18:41:36.571605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.715 [2024-07-22 18:41:36.571621] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.715 [2024-07-22 18:41:36.571635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.715 [2024-07-22 18:41:36.571651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.715 [2024-07-22 18:41:36.571666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.715 [2024-07-22 18:41:36.571682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.715 [2024-07-22 18:41:36.571819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.715 [2024-07-22 18:41:36.572141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.715 [2024-07-22 18:41:36.572174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.715 [2024-07-22 18:41:36.572196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.715 [2024-07-22 18:41:36.572211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.715 [2024-07-22 18:41:36.572228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:60992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.715 [2024-07-22 18:41:36.572241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.715 [2024-07-22 18:41:36.572258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.715 [2024-07-22 18:41:36.572279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.715 [2024-07-22 18:41:36.572540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:61008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.715 [2024-07-22 18:41:36.572569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.715 [2024-07-22 18:41:36.572588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:61016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.572602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.572619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:61024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.572634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.572651] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:97 nsid:1 lba:61032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.572665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.572810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.573099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.573125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.573140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.573157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.573171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.573187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.573451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.573484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.573501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.573517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.573532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.573548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.573563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.573579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.573877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.573897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.573912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.573936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 
lba:61112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.573950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.573967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:61120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.573980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.574143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.574279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.574385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:61136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.574409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.574429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:61144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.574718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.574745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:61152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.574760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.574776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.574791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.575077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.575108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.575128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.575142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.575159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:61184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.575173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.575190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61192 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.575204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.575464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.575492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.575512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.575526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.575544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.575558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.575574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.575588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.575848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.575994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.576088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.576106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.576124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.716 [2024-07-22 18:41:36.576139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.576299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.716 [2024-07-22 18:41:36.576519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.576553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.716 [2024-07-22 18:41:36.576569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.576587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.716 [2024-07-22 
18:41:36.576604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.576625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.716 [2024-07-22 18:41:36.576641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.576773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.716 [2024-07-22 18:41:36.577022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.577064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.716 [2024-07-22 18:41:36.577082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.577099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.577113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.577235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.577254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.577272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.716 [2024-07-22 18:41:36.577416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.716 [2024-07-22 18:41:36.577682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.717 [2024-07-22 18:41:36.577816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.578102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.717 [2024-07-22 18:41:36.578134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.578155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.717 [2024-07-22 18:41:36.578170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.578187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.717 [2024-07-22 18:41:36.578200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.578218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.717 [2024-07-22 18:41:36.578232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.578458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.717 [2024-07-22 18:41:36.578486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.578505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:61824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.717 [2024-07-22 18:41:36.578521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.578539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.717 [2024-07-22 18:41:36.578554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.578570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.717 [2024-07-22 18:41:36.578584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.578813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.717 [2024-07-22 18:41:36.578845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.578866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.717 [2024-07-22 18:41:36.578882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.578899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.717 [2024-07-22 18:41:36.578913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.579186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.717 [2024-07-22 18:41:36.579294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.579316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.717 [2024-07-22 18:41:36.579330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.579454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.717 [2024-07-22 18:41:36.579471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.579758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.717 [2024-07-22 18:41:36.579867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.579889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.717 [2024-07-22 18:41:36.579904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.579921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.717 [2024-07-22 18:41:36.579934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.579951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.717 [2024-07-22 18:41:36.580211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.580235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.717 [2024-07-22 18:41:36.580250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.580266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.717 [2024-07-22 18:41:36.580499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.580522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.717 [2024-07-22 18:41:36.580538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.580668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.717 [2024-07-22 18:41:36.580692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.580943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.717 [2024-07-22 18:41:36.580975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.580994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.717 [2024-07-22 18:41:36.581010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.581026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.717 [2024-07-22 18:41:36.581041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.581281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.717 [2024-07-22 18:41:36.581301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.581319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.717 [2024-07-22 18:41:36.581592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.581614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.717 [2024-07-22 18:41:36.581630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.581647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.717 [2024-07-22 18:41:36.581894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.581926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.717 [2024-07-22 18:41:36.581943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.581960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.717 [2024-07-22 18:41:36.581975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.581993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.717 [2024-07-22 18:41:36.582016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.582034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.717 [2024-07-22 18:41:36.582048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 
[2024-07-22 18:41:36.582384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.717 [2024-07-22 18:41:36.582425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.582445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.717 [2024-07-22 18:41:36.582460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.582477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:61440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.717 [2024-07-22 18:41:36.582492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.582509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:61448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.717 [2024-07-22 18:41:36.582524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.582774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.717 [2024-07-22 18:41:36.582907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.717 [2024-07-22 18:41:36.583068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.583185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.583208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:61472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.583223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.583240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:61480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.583254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.583507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:61488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.583633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.583665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.583944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.584041] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:61504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.584059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.584077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:61512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.584091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.584327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:61520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.584354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.584374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:61528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.584389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.584406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.584519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.584661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:61544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.584793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.584824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.584948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.585060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:61560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.585086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.585225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:61568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.585332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.585357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:61576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.585373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.585493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:61 nsid:1 lba:61584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.585517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.585651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.585793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.585923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.585942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.585960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:61608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.585978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.585995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:61616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.586029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.586047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:61624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.586276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.586329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.586346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.586364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:61640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.586378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.586394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:61648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.586409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.586426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:61656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.586440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.586781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61664 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.586812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.586849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.586868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.586886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:61680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.586900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.586916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:61688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.586936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.587191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:61696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.587219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.587238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:61704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.587252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.587269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:61712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.587283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.587428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:61720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.587527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.587548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.587562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.587580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.718 [2024-07-22 18:41:36.587593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.587743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.718 [2024-07-22 18:41:36.587969] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.718 [2024-07-22 18:41:36.587987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61744 len:8 PRP1 0x0 PRP2 0x0 00:34:24.718 [2024-07-22 18:41:36.588003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.588512] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b000 was disconnected and freed. reset controller. 00:34:24.718 [2024-07-22 18:41:36.588754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.718 [2024-07-22 18:41:36.588901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.589149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.718 [2024-07-22 18:41:36.589168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.718 [2024-07-22 18:41:36.589183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.719 [2024-07-22 18:41:36.589198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.719 [2024-07-22 18:41:36.589213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.719 [2024-07-22 18:41:36.589481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.719 [2024-07-22 18:41:36.589498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:34:24.719 [2024-07-22 18:41:36.590006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.719 [2024-07-22 18:41:36.590065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:34:24.719 [2024-07-22 18:41:36.590451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.719 [2024-07-22 18:41:36.590496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:34:24.719 [2024-07-22 18:41:36.590515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:34:24.719 [2024-07-22 18:41:36.590546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:34:24.719 [2024-07-22 18:41:36.590575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.719 [2024-07-22 18:41:36.590591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.719 [2024-07-22 18:41:36.590616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:24.719 [2024-07-22 18:41:36.590653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.719 [2024-07-22 18:41:36.591004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.719 18:41:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:34:26.641 [2024-07-22 18:41:38.591222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.641 [2024-07-22 18:41:38.591333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:34:26.641 [2024-07-22 18:41:38.591359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:34:26.641 [2024-07-22 18:41:38.591402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:34:26.641 [2024-07-22 18:41:38.591432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:26.641 [2024-07-22 18:41:38.591448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:26.641 [2024-07-22 18:41:38.591465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:26.641 [2024-07-22 18:41:38.591516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:26.641 [2024-07-22 18:41:38.591535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:26.641 18:41:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:34:26.641 18:41:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:26.641 18:41:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:34:26.900 18:41:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:34:26.900 18:41:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:34:26.900 18:41:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:34:26.900 18:41:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:34:27.467 18:41:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:34:27.467 18:41:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:34:28.842 [2024-07-22 18:41:40.591781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.842 [2024-07-22 18:41:40.592641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:34:28.842 [2024-07-22 18:41:40.592690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:34:28.842 [2024-07-22 18:41:40.592735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:34:28.842 [2024-07-22 18:41:40.592788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.842 [2024-07-22 
18:41:40.592812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.842 [2024-07-22 18:41:40.592830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.842 [2024-07-22 18:41:40.592899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.842 [2024-07-22 18:41:40.592919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.743 [2024-07-22 18:41:42.593297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.743 [2024-07-22 18:41:42.593401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.743 [2024-07-22 18:41:42.593422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.743 [2024-07-22 18:41:42.593440] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:34:30.743 [2024-07-22 18:41:42.593495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:31.736 00:34:31.736 Latency(us) 00:34:31.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:31.736 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:31.736 Verification LBA range: start 0x0 length 0x4000 00:34:31.736 NVMe0n1 : 8.19 929.36 3.63 15.63 0.00 135446.96 3321.48 7046430.72 00:34:31.736 =================================================================================================================== 00:34:31.737 Total : 929.36 3.63 15.63 0.00 135446.96 3321.48 7046430.72 00:34:31.737 0 00:34:32.303 18:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:34:32.303 18:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:32.303 18:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:34:32.562 18:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:34:32.562 18:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:34:32.562 18:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:34:32.562 18:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:34:32.892 18:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:34:32.892 18:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 107692 00:34:32.892 18:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 107644 00:34:32.892 18:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 107644 ']' 00:34:32.892 18:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 107644 00:34:32.892 18:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:34:32.892 18:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:32.892 18:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107644 00:34:32.892 killing process with pid 107644 
00:34:32.892 Received shutdown signal, test time was about 9.358278 seconds
00:34:32.892
00:34:32.892                                                              Latency(us)
00:34:32.892 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s      Average        min        max
00:34:32.892 ===================================================================================================================
00:34:32.892 Total                       :                  0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:34:32.892 18:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:34:32.893 18:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:34:32.893 18:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107644' 00:34:32.893 18:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 107644 00:34:32.893 18:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 107644 00:34:34.268 18:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:34.525 [2024-07-22 18:41:46.307256] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:34.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:34.525 18:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=107853 00:34:34.525 18:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:34:34.525 18:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 107853 /var/tmp/bdevperf.sock 00:34:34.525 18:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 107853 ']' 00:34:34.525 18:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:34.525 18:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:34.525 18:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:34.525 18:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:34.525 18:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:34:34.525 [2024-07-22 18:41:46.439582] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:34:34.525 [2024-07-22 18:41:46.439773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107853 ] 00:34:34.782 [2024-07-22 18:41:46.613479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:35.038 [2024-07-22 18:41:46.891099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:35.602 18:41:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:35.602 18:41:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:34:35.602 18:41:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:35.860 18:41:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:34:36.118 NVMe0n1 00:34:36.118 18:41:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=107903 00:34:36.118 18:41:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:36.118 18:41:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:34:36.376 Running I/O for 10 seconds... 00:34:37.315 18:41:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:37.315 [2024-07-22 18:41:49.304648] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.304730] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.304749] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.304763] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.304777] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.304790] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.304803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.304818] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.304847] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.304863] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with 
the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.304876] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.304888] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.304900] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.304913] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.304926] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.304938] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.304950] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.304963] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.304976] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.304988] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.305001] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.305022] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.305034] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.305047] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.305060] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.305072] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.305084] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.305096] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.305117] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.305130] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.305142] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the 
state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.305165] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.305177] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.305191] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.305203] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.305225] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.305237] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.305249] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.305261] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.305273] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.305285] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:34:37.315 [2024-07-22 18:41:49.307621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:59456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.315 [2024-07-22 18:41:49.307683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.315 [2024-07-22 18:41:49.307746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.315 [2024-07-22 18:41:49.307768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.315 [2024-07-22 18:41:49.307789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.315 [2024-07-22 18:41:49.307803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.315 [2024-07-22 18:41:49.307821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.315 [2024-07-22 18:41:49.308065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.315 [2024-07-22 18:41:49.308102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.315 [2024-07-22 18:41:49.308129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.315 [2024-07-22 18:41:49.308160] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.315 [2024-07-22 18:41:49.308469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.315 [2024-07-22 18:41:49.308518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.315 [2024-07-22 18:41:49.308535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.315 [2024-07-22 18:41:49.308553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.315 [2024-07-22 18:41:49.308568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.315 [2024-07-22 18:41:49.308872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:59856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.315 [2024-07-22 18:41:49.308905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.315 [2024-07-22 18:41:49.308928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.315 [2024-07-22 18:41:49.308943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.315 [2024-07-22 18:41:49.308961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.315 [2024-07-22 18:41:49.308977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.315 [2024-07-22 18:41:49.309111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:59880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.315 [2024-07-22 18:41:49.309373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.315 [2024-07-22 18:41:49.309496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:59888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.315 [2024-07-22 18:41:49.309521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.315 [2024-07-22 18:41:49.309793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.315 [2024-07-22 18:41:49.309826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.315 [2024-07-22 18:41:49.309861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:59904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.315 [2024-07-22 18:41:49.309878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.315 [2024-07-22 18:41:49.309896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:59912 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.315 [2024-07-22 18:41:49.310130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.315 [2024-07-22 18:41:49.310155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:59920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.315 [2024-07-22 18:41:49.310170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.310196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.310210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.310480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.310632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.310979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.311008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.311029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.311043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.311062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.311211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.311427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.311447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.311465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.311481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.311745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.311782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.311804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 
[2024-07-22 18:41:49.311819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.311851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.311868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.311886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.311900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.312248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.312268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.312287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.312301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.312319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.312742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.312975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.313004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.313025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.313040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.313058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.313292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.313329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.313347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.313365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.313379] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.313525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.313627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.313650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.313666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.313688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.313923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.313948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.313964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.314215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.314242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.314262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.314276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.314294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.314545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.314582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.314599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.314619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.314880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.314907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.314924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.315155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.315187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.315215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.315233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.315495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.315523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.315780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.315800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.315822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.316061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.316087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.316107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.316393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.316428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.316452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.316471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.316500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.316 [2024-07-22 18:41:49.316524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.316 [2024-07-22 18:41:49.316544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.316559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.316578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.316598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.316912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.316961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.317006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.317294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.317350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.317380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.317641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.317688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.317725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.318016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.318078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.318110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.318406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.318457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.318494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.317 [2024-07-22 18:41:49.318782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.318856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:59472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.317 [2024-07-22 18:41:49.318885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 
18:41:49.319191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.317 [2024-07-22 18:41:49.319222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.319251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.317 [2024-07-22 18:41:49.319665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.319727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.317 [2024-07-22 18:41:49.319762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.319799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.317 [2024-07-22 18:41:49.319828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.320201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.317 [2024-07-22 18:41:49.320233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.320538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.320588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.320626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.320765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.321085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.321118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.321433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.321484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.321519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.321545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.321871] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.321926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.321961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.322294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.322352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.322381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.322688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.322739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.322778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.322928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.323140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.323171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.323202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.323533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.323578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.323597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.323616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.323632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.323910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:59520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.317 [2024-07-22 18:41:49.323945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.323973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:59 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.317 [2024-07-22 18:41:49.323989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.324008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.324023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.324042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.324056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.324310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.324344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.324365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.324379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.324398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.324412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.324431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.324581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.324698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.324716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.317 [2024-07-22 18:41:49.324862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:37.317 [2024-07-22 18:41:49.325103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.318 [2024-07-22 18:41:49.325143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:59536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.318 [2024-07-22 18:41:49.325169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.318 [2024-07-22 18:41:49.325188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59544 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.318 [2024-07-22 18:41:49.325319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.318 [2024-07-22 18:41:49.325561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.318 [2024-07-22 18:41:49.325595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.318 [2024-07-22 18:41:49.325617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.318 [2024-07-22 18:41:49.325632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.318 [2024-07-22 18:41:49.325659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.318 [2024-07-22 18:41:49.325673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.318 [2024-07-22 18:41:49.325825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.318 [2024-07-22 18:41:49.325944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.318 [2024-07-22 18:41:49.325967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.318 [2024-07-22 18:41:49.325982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.318 [2024-07-22 18:41:49.326016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.318 [2024-07-22 18:41:49.326036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.318 [2024-07-22 18:41:49.326055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.318 [2024-07-22 18:41:49.326317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.318 [2024-07-22 18:41:49.326340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.318 [2024-07-22 18:41:49.326356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.318 [2024-07-22 18:41:49.326373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.318 [2024-07-22 18:41:49.326389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.318 [2024-07-22 18:41:49.326525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:37.318 [2024-07-22 18:41:49.326627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.318 [2024-07-22 18:41:49.326652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:37.318 [2024-07-22 18:41:49.326667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.318 [2024-07-22 18:41:49.326960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.318 [2024-07-22 18:41:49.327005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59640 len:8 PRP1 0x0 PRP2 0x0 00:34:37.318 [2024-07-22 18:41:49.327025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.318 [2024-07-22 18:41:49.327496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:37.318 [2024-07-22 18:41:49.327532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.318 [2024-07-22 18:41:49.327564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:37.318 [2024-07-22 18:41:49.327580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.318 [2024-07-22 18:41:49.327596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:37.318 [2024-07-22 18:41:49.327804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.318 [2024-07-22 18:41:49.327850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:37.318 [2024-07-22 18:41:49.327886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.318 [2024-07-22 18:41:49.327902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:34:37.318 [2024-07-22 18:41:49.328456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.318 [2024-07-22 18:41:49.328489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.318 [2024-07-22 18:41:49.328514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59648 len:8 PRP1 0x0 PRP2 0x0 00:34:37.318 [2024-07-22 18:41:49.328532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.318 [2024-07-22 18:41:49.328567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.318 [2024-07-22 18:41:49.328579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.318 [2024-07-22 18:41:49.328855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59656 len:8 PRP1 0x0 PRP2 0x0 00:34:37.318 
[2024-07-22 18:41:49.328875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.318 [2024-07-22 18:41:49.328892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.318 [2024-07-22 18:41:49.328904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.318 [2024-07-22 18:41:49.329166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59664 len:8 PRP1 0x0 PRP2 0x0 00:34:37.318 [2024-07-22 18:41:49.329196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.318 [2024-07-22 18:41:49.329218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.318 [2024-07-22 18:41:49.329231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.318 [2024-07-22 18:41:49.329244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59672 len:8 PRP1 0x0 PRP2 0x0 00:34:37.318 [2024-07-22 18:41:49.329258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.318 [2024-07-22 18:41:49.329273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.318 [2024-07-22 18:41:49.329283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.318 [2024-07-22 18:41:49.329381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59680 len:8 PRP1 0x0 PRP2 0x0 00:34:37.318 [2024-07-22 18:41:49.329403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.318 [2024-07-22 18:41:49.329419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.580 [2024-07-22 18:41:49.329431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.580 [2024-07-22 18:41:49.329676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59688 len:8 PRP1 0x0 PRP2 0x0 00:34:37.580 [2024-07-22 18:41:49.329693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.580 [2024-07-22 18:41:49.329710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.580 [2024-07-22 18:41:49.329722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.580 [2024-07-22 18:41:49.329736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59696 len:8 PRP1 0x0 PRP2 0x0 00:34:37.580 [2024-07-22 18:41:49.329756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.580 [2024-07-22 18:41:49.330069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.580 [2024-07-22 18:41:49.330086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.580 [2024-07-22 18:41:49.330111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59704 len:8 PRP1 0x0 PRP2 0x0 00:34:37.580 [2024-07-22 18:41:49.330136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.580 18:41:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:34:37.580 [2024-07-22 18:41:49.330156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.580 [2024-07-22 18:41:49.330396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.580 [2024-07-22 18:41:49.330411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59712 len:8 PRP1 0x0 PRP2 0x0 00:34:37.580 [2024-07-22 18:41:49.330425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.580 [2024-07-22 18:41:49.330441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.580 [2024-07-22 18:41:49.330453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.580 [2024-07-22 18:41:49.330465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59720 len:8 PRP1 0x0 PRP2 0x0 00:34:37.580 [2024-07-22 18:41:49.330479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.580 [2024-07-22 18:41:49.330493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.580 [2024-07-22 18:41:49.330504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.580 [2024-07-22 18:41:49.330517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59728 len:8 PRP1 0x0 PRP2 0x0 00:34:37.580 [2024-07-22 18:41:49.330530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.580 [2024-07-22 18:41:49.330544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.580 [2024-07-22 18:41:49.330555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.580 [2024-07-22 18:41:49.330569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59736 len:8 PRP1 0x0 PRP2 0x0 00:34:37.580 [2024-07-22 18:41:49.330583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.580 [2024-07-22 18:41:49.330597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.580 [2024-07-22 18:41:49.330608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.580 [2024-07-22 18:41:49.330621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59744 len:8 PRP1 0x0 PRP2 0x0 00:34:37.580 [2024-07-22 18:41:49.330634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.580 [2024-07-22 18:41:49.330648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.580 [2024-07-22 18:41:49.330660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.580 [2024-07-22 18:41:49.330672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59752 len:8 PRP1 0x0 PRP2 0x0 00:34:37.580 [2024-07-22 
18:41:49.330686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.580 [2024-07-22 18:41:49.330700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.580 [2024-07-22 18:41:49.330712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.580 [2024-07-22 18:41:49.330725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59760 len:8 PRP1 0x0 PRP2 0x0 00:34:37.580 [2024-07-22 18:41:49.330739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.581 [2024-07-22 18:41:49.330753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.581 [2024-07-22 18:41:49.330764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.581 [2024-07-22 18:41:49.330777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59768 len:8 PRP1 0x0 PRP2 0x0 00:34:37.581 [2024-07-22 18:41:49.330793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.581 [2024-07-22 18:41:49.330808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.581 [2024-07-22 18:41:49.330819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.581 [2024-07-22 18:41:49.330845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59776 len:8 PRP1 0x0 PRP2 0x0 00:34:37.581 [2024-07-22 18:41:49.330865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.581 [2024-07-22 18:41:49.330888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.581 [2024-07-22 18:41:49.330899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.581 [2024-07-22 18:41:49.330913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59784 len:8 PRP1 0x0 PRP2 0x0 00:34:37.581 [2024-07-22 18:41:49.330926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.581 [2024-07-22 18:41:49.330940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.581 [2024-07-22 18:41:49.330951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.581 [2024-07-22 18:41:49.330963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59792 len:8 PRP1 0x0 PRP2 0x0 00:34:37.581 [2024-07-22 18:41:49.330977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.581 [2024-07-22 18:41:49.331229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.581 [2024-07-22 18:41:49.331246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.581 [2024-07-22 18:41:49.331261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59456 len:8 PRP1 0x0 PRP2 0x0 00:34:37.581 [2024-07-22 18:41:49.331276] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.581 [2024-07-22 18:41:49.331290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.581 [2024-07-22 18:41:49.331301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.581 [2024-07-22 18:41:49.331313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59800 len:8 PRP1 0x0 PRP2 0x0 00:34:37.581 [2024-07-22 18:41:49.331425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.581 [2024-07-22 18:41:49.331450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.581 [2024-07-22 18:41:49.331589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.581 [2024-07-22 18:41:49.331697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59808 len:8 PRP1 0x0 PRP2 0x0 00:34:37.581 [2024-07-22 18:41:49.331723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.581 [2024-07-22 18:41:49.331740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.581 [2024-07-22 18:41:49.331752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.581 [2024-07-22 18:41:49.331765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59816 len:8 PRP1 0x0 PRP2 0x0 00:34:37.581 [2024-07-22 18:41:49.332066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.581 [2024-07-22 18:41:49.332089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.581 [2024-07-22 18:41:49.332103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.581 [2024-07-22 18:41:49.332125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59824 len:8 PRP1 0x0 PRP2 0x0 00:34:37.581 [2024-07-22 18:41:49.332241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.581 [2024-07-22 18:41:49.332266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.581 [2024-07-22 18:41:49.332524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.581 [2024-07-22 18:41:49.332556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59832 len:8 PRP1 0x0 PRP2 0x0 00:34:37.581 [2024-07-22 18:41:49.332574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.581 [2024-07-22 18:41:49.332590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.581 [2024-07-22 18:41:49.332602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.581 [2024-07-22 18:41:49.332615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59840 len:8 PRP1 0x0 PRP2 0x0 00:34:37.581 [2024-07-22 18:41:49.332629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.581 [2024-07-22 18:41:49.332736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.581 [2024-07-22 18:41:49.332758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.581 [2024-07-22 18:41:49.332772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59848 len:8 PRP1 0x0 PRP2 0x0 00:34:37.581 [2024-07-22 18:41:49.333062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.581 [2024-07-22 18:41:49.333319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.581 [2024-07-22 18:41:49.333336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.581 [2024-07-22 18:41:49.333351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59856 len:8 PRP1 0x0 PRP2 0x0 00:34:37.581 [2024-07-22 18:41:49.333366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.581 [2024-07-22 18:41:49.333379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.581 [2024-07-22 18:41:49.333391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.581 [2024-07-22 18:41:49.333528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59864 len:8 PRP1 0x0 PRP2 0x0 00:34:37.581 [2024-07-22 18:41:49.333672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.581 [2024-07-22 18:41:49.333824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.581 [2024-07-22 18:41:49.334081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.581 [2024-07-22 18:41:49.334105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59872 len:8 PRP1 0x0 PRP2 0x0 00:34:37.581 [2024-07-22 18:41:49.334250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.581 [2024-07-22 18:41:49.334531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.581 [2024-07-22 18:41:49.334678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.581 [2024-07-22 18:41:49.334782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59880 len:8 PRP1 0x0 PRP2 0x0 00:34:37.581 [2024-07-22 18:41:49.334811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.581 [2024-07-22 18:41:49.335091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.581 [2024-07-22 18:41:49.335140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.581 [2024-07-22 18:41:49.335190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59888 len:8 PRP1 0x0 PRP2 0x0 00:34:37.581 [2024-07-22 18:41:49.335483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.581 [2024-07-22 
18:41:49.335518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.581 [2024-07-22 18:41:49.335823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.581 [2024-07-22 18:41:49.335874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59896 len:8 PRP1 0x0 PRP2 0x0 00:34:37.581 [2024-07-22 18:41:49.336170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.581 [2024-07-22 18:41:49.336200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.581 [2024-07-22 18:41:49.336220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.581 [2024-07-22 18:41:49.336501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59904 len:8 PRP1 0x0 PRP2 0x0 00:34:37.581 [2024-07-22 18:41:49.336531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.581 [2024-07-22 18:41:49.336784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.581 [2024-07-22 18:41:49.336829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.581 [2024-07-22 18:41:49.336874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59912 len:8 PRP1 0x0 PRP2 0x0 00:34:37.581 [2024-07-22 18:41:49.337163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.581 [2024-07-22 18:41:49.337216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.581 [2024-07-22 18:41:49.337240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.581 [2024-07-22 18:41:49.337265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59920 len:8 PRP1 0x0 PRP2 0x0 00:34:37.581 [2024-07-22 18:41:49.337548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.581 [2024-07-22 18:41:49.337577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.581 [2024-07-22 18:41:49.337848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.581 [2024-07-22 18:41:49.337906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59928 len:8 PRP1 0x0 PRP2 0x0 00:34:37.581 [2024-07-22 18:41:49.337933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.581 [2024-07-22 18:41:49.338228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.581 [2024-07-22 18:41:49.338273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.581 [2024-07-22 18:41:49.338298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59936 len:8 PRP1 0x0 PRP2 0x0 00:34:37.581 [2024-07-22 18:41:49.338323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.581 [2024-07-22 18:41:49.338620] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.581 [2024-07-22 18:41:49.338645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.582 [2024-07-22 18:41:49.338667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59944 len:8 PRP1 0x0 PRP2 0x0 00:34:37.582 [2024-07-22 18:41:49.338995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.582 [2024-07-22 18:41:49.339281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.582 [2024-07-22 18:41:49.339323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.582 [2024-07-22 18:41:49.339346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59952 len:8 PRP1 0x0 PRP2 0x0 00:34:37.582 [2024-07-22 18:41:49.339368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.582 [2024-07-22 18:41:49.339656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.582 [2024-07-22 18:41:49.339684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.582 [2024-07-22 18:41:49.340047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59960 len:8 PRP1 0x0 PRP2 0x0 00:34:37.582 [2024-07-22 18:41:49.340355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.582 [2024-07-22 18:41:49.340415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.582 [2024-07-22 18:41:49.340439] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.582 [2024-07-22 18:41:49.340726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59968 len:8 PRP1 0x0 PRP2 0x0 00:34:37.582 [2024-07-22 18:41:49.340782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.582 [2024-07-22 18:41:49.340813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.582 [2024-07-22 18:41:49.341116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.582 [2024-07-22 18:41:49.341148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59976 len:8 PRP1 0x0 PRP2 0x0 00:34:37.582 [2024-07-22 18:41:49.341436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.582 [2024-07-22 18:41:49.341481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.582 [2024-07-22 18:41:49.341503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.582 [2024-07-22 18:41:49.341526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59984 len:8 PRP1 0x0 PRP2 0x0 00:34:37.582 [2024-07-22 18:41:49.341809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.582 [2024-07-22 18:41:49.341986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:34:37.582 [2024-07-22 18:41:49.342164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.582 [2024-07-22 18:41:49.342194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59992 len:8 PRP1 0x0 PRP2 0x0 00:34:37.582 [2024-07-22 18:41:49.342466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.582 [2024-07-22 18:41:49.342519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.582 [2024-07-22 18:41:49.342543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.582 [2024-07-22 18:41:49.342827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60000 len:8 PRP1 0x0 PRP2 0x0 00:34:37.582 [2024-07-22 18:41:49.342890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.582 [2024-07-22 18:41:49.342919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.582 [2024-07-22 18:41:49.343208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.582 [2024-07-22 18:41:49.343251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60008 len:8 PRP1 0x0 PRP2 0x0 00:34:37.582 [2024-07-22 18:41:49.343277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.582 [2024-07-22 18:41:49.343304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.582 [2024-07-22 18:41:49.343717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.582 [2024-07-22 18:41:49.343745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60016 len:8 PRP1 0x0 PRP2 0x0 00:34:37.582 [2024-07-22 18:41:49.343773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.582 [2024-07-22 18:41:49.343915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.582 [2024-07-22 18:41:49.344100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.582 [2024-07-22 18:41:49.344130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60024 len:8 PRP1 0x0 PRP2 0x0 00:34:37.582 [2024-07-22 18:41:49.344410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.582 [2024-07-22 18:41:49.344459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.582 [2024-07-22 18:41:49.344482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.582 [2024-07-22 18:41:49.344504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60032 len:8 PRP1 0x0 PRP2 0x0 00:34:37.582 [2024-07-22 18:41:49.344820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.582 [2024-07-22 18:41:49.344877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.582 [2024-07-22 18:41:49.345157] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.582 [2024-07-22 18:41:49.345207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60040 len:8 PRP1 0x0 PRP2 0x0 00:34:37.582 [2024-07-22 18:41:49.345234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.582 [2024-07-22 18:41:49.345510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.582 [2024-07-22 18:41:49.345555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.582 [2024-07-22 18:41:49.345582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60048 len:8 PRP1 0x0 PRP2 0x0 00:34:37.582 [2024-07-22 18:41:49.345888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.582 [2024-07-22 18:41:49.345947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.582 [2024-07-22 18:41:49.345972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.582 [2024-07-22 18:41:49.346276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60056 len:8 PRP1 0x0 PRP2 0x0 00:34:37.582 [2024-07-22 18:41:49.346336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.582 [2024-07-22 18:41:49.346371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.582 [2024-07-22 18:41:49.346660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.582 [2024-07-22 18:41:49.346694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60064 len:8 PRP1 0x0 PRP2 0x0 00:34:37.582 [2024-07-22 18:41:49.347016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.582 [2024-07-22 18:41:49.347054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.582 [2024-07-22 18:41:49.347348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.582 [2024-07-22 18:41:49.347387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60072 len:8 PRP1 0x0 PRP2 0x0 00:34:37.582 [2024-07-22 18:41:49.347641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.582 [2024-07-22 18:41:49.347696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.582 [2024-07-22 18:41:49.347722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.582 [2024-07-22 18:41:49.348016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60080 len:8 PRP1 0x0 PRP2 0x0 00:34:37.582 [2024-07-22 18:41:49.348066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.582 [2024-07-22 18:41:49.348095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.582 [2024-07-22 18:41:49.348360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:34:37.582 [2024-07-22 18:41:49.348414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60088 len:8 PRP1 0x0 PRP2 0x0 00:34:37.582 [2024-07-22 18:41:49.348445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.582 [2024-07-22 18:41:49.348756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.582 [2024-07-22 18:41:49.348793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.582 [2024-07-22 18:41:49.348816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60096 len:8 PRP1 0x0 PRP2 0x0 00:34:37.582 [2024-07-22 18:41:49.349223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.582 [2024-07-22 18:41:49.349287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.582 [2024-07-22 18:41:49.349314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.582 [2024-07-22 18:41:49.349701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60104 len:8 PRP1 0x0 PRP2 0x0 00:34:37.582 [2024-07-22 18:41:49.349735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.582 [2024-07-22 18:41:49.349764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.582 [2024-07-22 18:41:49.350217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.582 [2024-07-22 18:41:49.350249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60112 len:8 PRP1 0x0 PRP2 0x0 00:34:37.582 [2024-07-22 18:41:49.350548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.582 [2024-07-22 18:41:49.350598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.582 [2024-07-22 18:41:49.350622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.582 [2024-07-22 18:41:49.350929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60120 len:8 PRP1 0x0 PRP2 0x0 00:34:37.582 [2024-07-22 18:41:49.350989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.582 [2024-07-22 18:41:49.351021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.582 [2024-07-22 18:41:49.351043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.582 [2024-07-22 18:41:49.351066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60128 len:8 PRP1 0x0 PRP2 0x0 00:34:37.583 [2024-07-22 18:41:49.351509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.583 [2024-07-22 18:41:49.351547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.583 [2024-07-22 18:41:49.351571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.583 
[2024-07-22 18:41:49.351595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60136 len:8 PRP1 0x0 PRP2 0x0 00:34:37.583 [2024-07-22 18:41:49.352056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.583 [2024-07-22 18:41:49.352091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.583 [2024-07-22 18:41:49.352381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.583 [2024-07-22 18:41:49.352721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60144 len:8 PRP1 0x0 PRP2 0x0 00:34:37.583 [2024-07-22 18:41:49.352767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.583 [2024-07-22 18:41:49.352800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.583 [2024-07-22 18:41:49.353080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.583 [2024-07-22 18:41:49.353121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60152 len:8 PRP1 0x0 PRP2 0x0 00:34:37.583 [2024-07-22 18:41:49.353152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.583 [2024-07-22 18:41:49.353477] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.583 [2024-07-22 18:41:49.353528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.583 [2024-07-22 18:41:49.353554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60160 len:8 PRP1 0x0 PRP2 0x0 00:34:37.583 [2024-07-22 18:41:49.353862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.583 [2024-07-22 18:41:49.353921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.583 [2024-07-22 18:41:49.353944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.583 [2024-07-22 18:41:49.353968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60168 len:8 PRP1 0x0 PRP2 0x0 00:34:37.583 [2024-07-22 18:41:49.354317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.583 [2024-07-22 18:41:49.354350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.583 [2024-07-22 18:41:49.354658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.583 [2024-07-22 18:41:49.354700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60176 len:8 PRP1 0x0 PRP2 0x0 00:34:37.583 [2024-07-22 18:41:49.354727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.583 [2024-07-22 18:41:49.354754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.583 [2024-07-22 18:41:49.355144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.583 [2024-07-22 18:41:49.355172] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60184 len:8 PRP1 0x0 PRP2 0x0 00:34:37.583 [2024-07-22 18:41:49.355195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.583 [2024-07-22 18:41:49.355218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.583 [2024-07-22 18:41:49.355235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.583 [2024-07-22 18:41:49.355254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60192 len:8 PRP1 0x0 PRP2 0x0 00:34:37.583 [2024-07-22 18:41:49.355527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.583 [2024-07-22 18:41:49.355555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.583 [2024-07-22 18:41:49.355572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.583 [2024-07-22 18:41:49.355847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60200 len:8 PRP1 0x0 PRP2 0x0 00:34:37.583 [2024-07-22 18:41:49.355876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.583 [2024-07-22 18:41:49.356121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.583 [2024-07-22 18:41:49.356169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.583 [2024-07-22 18:41:49.356193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60208 len:8 PRP1 0x0 PRP2 0x0 00:34:37.583 [2024-07-22 18:41:49.356215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.583 [2024-07-22 18:41:49.356618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.583 [2024-07-22 18:41:49.356648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.583 [2024-07-22 18:41:49.356943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60216 len:8 PRP1 0x0 PRP2 0x0 00:34:37.583 [2024-07-22 18:41:49.356972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.583 [2024-07-22 18:41:49.357222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.583 [2024-07-22 18:41:49.357266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.583 [2024-07-22 18:41:49.357289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60224 len:8 PRP1 0x0 PRP2 0x0 00:34:37.583 [2024-07-22 18:41:49.357312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.583 [2024-07-22 18:41:49.357677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.583 [2024-07-22 18:41:49.357712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.583 [2024-07-22 18:41:49.357734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:60232 len:8 PRP1 0x0 PRP2 0x0 00:34:37.583 [2024-07-22 18:41:49.357756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.583 [2024-07-22 18:41:49.357780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.583 [2024-07-22 18:41:49.357797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.583 [2024-07-22 18:41:49.357816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60240 len:8 PRP1 0x0 PRP2 0x0 00:34:37.583 [2024-07-22 18:41:49.358098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.583 [2024-07-22 18:41:49.358130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.583 [2024-07-22 18:41:49.358150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.583 [2024-07-22 18:41:49.358429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60248 len:8 PRP1 0x0 PRP2 0x0 00:34:37.583 [2024-07-22 18:41:49.358459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.583 [2024-07-22 18:41:49.358705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.583 [2024-07-22 18:41:49.358754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.583 [2024-07-22 18:41:49.358777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60256 len:8 PRP1 0x0 PRP2 0x0 00:34:37.583 [2024-07-22 18:41:49.358801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.583 [2024-07-22 18:41:49.359094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.583 [2024-07-22 18:41:49.359118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.583 [2024-07-22 18:41:49.359140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60264 len:8 PRP1 0x0 PRP2 0x0 00:34:37.583 [2024-07-22 18:41:49.359401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.583 [2024-07-22 18:41:49.359429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.583 [2024-07-22 18:41:49.359447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.583 [2024-07-22 18:41:49.359706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60272 len:8 PRP1 0x0 PRP2 0x0 00:34:37.583 [2024-07-22 18:41:49.359734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.583 [2024-07-22 18:41:49.359986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.583 [2024-07-22 18:41:49.360030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.583 [2024-07-22 18:41:49.360053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60280 len:8 PRP1 0x0 PRP2 0x0 
00:34:37.583 [2024-07-22 18:41:49.360075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.583 [2024-07-22 18:41:49.360463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.583 [2024-07-22 18:41:49.360500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.583 [2024-07-22 18:41:49.360522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60288 len:8 PRP1 0x0 PRP2 0x0 00:34:37.583 [2024-07-22 18:41:49.360545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.583 [2024-07-22 18:41:49.360926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.583 [2024-07-22 18:41:49.360967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.583 [2024-07-22 18:41:49.360989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60296 len:8 PRP1 0x0 PRP2 0x0 00:34:37.583 [2024-07-22 18:41:49.361012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.583 [2024-07-22 18:41:49.361514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.583 [2024-07-22 18:41:49.361565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.583 [2024-07-22 18:41:49.361590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59464 len:8 PRP1 0x0 PRP2 0x0 00:34:37.583 [2024-07-22 18:41:49.361617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.583 [2024-07-22 18:41:49.362046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.583 [2024-07-22 18:41:49.362112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.584 [2024-07-22 18:41:49.362143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59472 len:8 PRP1 0x0 PRP2 0x0 00:34:37.584 [2024-07-22 18:41:49.362437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.584 [2024-07-22 18:41:49.362491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.584 [2024-07-22 18:41:49.362513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.584 [2024-07-22 18:41:49.362535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59480 len:8 PRP1 0x0 PRP2 0x0 00:34:37.584 [2024-07-22 18:41:49.362851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.584 [2024-07-22 18:41:49.363139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.584 [2024-07-22 18:41:49.363189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.584 [2024-07-22 18:41:49.363214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59488 len:8 PRP1 0x0 PRP2 0x0 00:34:37.584 [2024-07-22 18:41:49.363238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.584 [2024-07-22 18:41:49.363508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.584 [2024-07-22 18:41:49.363530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.584 [2024-07-22 18:41:49.363552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59496 len:8 PRP1 0x0 PRP2 0x0 00:34:37.584 [2024-07-22 18:41:49.363813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.584 [2024-07-22 18:41:49.363861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.584 [2024-07-22 18:41:49.364163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.584 [2024-07-22 18:41:49.364206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59504 len:8 PRP1 0x0 PRP2 0x0 00:34:37.584 [2024-07-22 18:41:49.364234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.584 [2024-07-22 18:41:49.364263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.584 [2024-07-22 18:41:49.364666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.584 [2024-07-22 18:41:49.364693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59512 len:8 PRP1 0x0 PRP2 0x0 00:34:37.584 [2024-07-22 18:41:49.364718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.584 [2024-07-22 18:41:49.364746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.584 [2024-07-22 18:41:49.365066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.584 [2024-07-22 18:41:49.365096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60304 len:8 PRP1 0x0 PRP2 0x0 00:34:37.584 [2024-07-22 18:41:49.365388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.584 [2024-07-22 18:41:49.365431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.584 [2024-07-22 18:41:49.365451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.584 [2024-07-22 18:41:49.365471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60312 len:8 PRP1 0x0 PRP2 0x0 00:34:37.584 [2024-07-22 18:41:49.365493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.584 [2024-07-22 18:41:49.365627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.584 [2024-07-22 18:41:49.365940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.584 [2024-07-22 18:41:49.366216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60320 len:8 PRP1 0x0 PRP2 0x0 00:34:37.584 [2024-07-22 18:41:49.366266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.584 [2024-07-22 18:41:49.366294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.584 [2024-07-22 18:41:49.366312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.584 [2024-07-22 18:41:49.366573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60328 len:8 PRP1 0x0 PRP2 0x0 00:34:37.584 [2024-07-22 18:41:49.366593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.584 [2024-07-22 18:41:49.366610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.584 [2024-07-22 18:41:49.366622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.584 [2024-07-22 18:41:49.366634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60336 len:8 PRP1 0x0 PRP2 0x0 00:34:37.584 [2024-07-22 18:41:49.366943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.584 [2024-07-22 18:41:49.366963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.584 [2024-07-22 18:41:49.366975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.584 [2024-07-22 18:41:49.367257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60344 len:8 PRP1 0x0 PRP2 0x0 00:34:37.584 [2024-07-22 18:41:49.367282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.584 [2024-07-22 18:41:49.367300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.584 [2024-07-22 18:41:49.367312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.584 [2024-07-22 18:41:49.367324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60352 len:8 PRP1 0x0 PRP2 0x0 00:34:37.584 [2024-07-22 18:41:49.367338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.584 [2024-07-22 18:41:49.367622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.584 [2024-07-22 18:41:49.367639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.584 [2024-07-22 18:41:49.367652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60360 len:8 PRP1 0x0 PRP2 0x0 00:34:37.584 [2024-07-22 18:41:49.367666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.584 [2024-07-22 18:41:49.367680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.584 [2024-07-22 18:41:49.367690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.584 [2024-07-22 18:41:49.367961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60368 len:8 PRP1 0x0 PRP2 0x0 00:34:37.584 [2024-07-22 18:41:49.368065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:34:37.584 [2024-07-22 18:41:49.368085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.584 [2024-07-22 18:41:49.368097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.584 [2024-07-22 18:41:49.368235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60376 len:8 PRP1 0x0 PRP2 0x0 00:34:37.584 [2024-07-22 18:41:49.368375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.584 [2024-07-22 18:41:49.368403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.584 [2024-07-22 18:41:49.368638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.584 [2024-07-22 18:41:49.368663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60384 len:8 PRP1 0x0 PRP2 0x0 00:34:37.584 [2024-07-22 18:41:49.368679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.584 [2024-07-22 18:41:49.368823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.584 [2024-07-22 18:41:49.369077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.584 [2024-07-22 18:41:49.369101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60392 len:8 PRP1 0x0 PRP2 0x0 00:34:37.584 [2024-07-22 18:41:49.369125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.584 [2024-07-22 18:41:49.369149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.584 [2024-07-22 18:41:49.369382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.584 [2024-07-22 18:41:49.369405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60400 len:8 PRP1 0x0 PRP2 0x0 00:34:37.584 [2024-07-22 18:41:49.369420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.584 [2024-07-22 18:41:49.369435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.584 [2024-07-22 18:41:49.369446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.584 [2024-07-22 18:41:49.369458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60408 len:8 PRP1 0x0 PRP2 0x0 00:34:37.584 [2024-07-22 18:41:49.369743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.584 [2024-07-22 18:41:49.369765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.584 [2024-07-22 18:41:49.369776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.584 [2024-07-22 18:41:49.369986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59520 len:8 PRP1 0x0 PRP2 0x0 00:34:37.584 [2024-07-22 18:41:49.370022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.584 [2024-07-22 18:41:49.370040] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.584 [2024-07-22 18:41:49.370053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.584 [2024-07-22 18:41:49.370066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59528 len:8 PRP1 0x0 PRP2 0x0 00:34:37.584 [2024-07-22 18:41:49.370079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.584 [2024-07-22 18:41:49.370346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.584 [2024-07-22 18:41:49.370589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.584 [2024-07-22 18:41:49.370615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60416 len:8 PRP1 0x0 PRP2 0x0 00:34:37.584 [2024-07-22 18:41:49.370631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.584 [2024-07-22 18:41:49.370890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.584 [2024-07-22 18:41:49.370913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.584 [2024-07-22 18:41:49.370927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60424 len:8 PRP1 0x0 PRP2 0x0 00:34:37.585 [2024-07-22 18:41:49.370942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.585 [2024-07-22 18:41:49.370957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.585 [2024-07-22 18:41:49.370967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.585 [2024-07-22 18:41:49.371246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60432 len:8 PRP1 0x0 PRP2 0x0 00:34:37.585 [2024-07-22 18:41:49.371273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.585 [2024-07-22 18:41:49.371290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.585 [2024-07-22 18:41:49.371302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.585 [2024-07-22 18:41:49.371315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60440 len:8 PRP1 0x0 PRP2 0x0 00:34:37.585 [2024-07-22 18:41:49.371580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.585 [2024-07-22 18:41:49.371613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.585 [2024-07-22 18:41:49.371627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.585 [2024-07-22 18:41:49.371641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60448 len:8 PRP1 0x0 PRP2 0x0 00:34:37.585 [2024-07-22 18:41:49.371655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.585 [2024-07-22 18:41:49.371789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:34:37.585 [2024-07-22 18:41:49.371997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.585 [2024-07-22 18:41:49.372014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60456 len:8 PRP1 0x0 PRP2 0x0 00:34:37.585 [2024-07-22 18:41:49.372030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.585 [2024-07-22 18:41:49.372045] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.585 [2024-07-22 18:41:49.372056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.585 [2024-07-22 18:41:49.372068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60464 len:8 PRP1 0x0 PRP2 0x0 00:34:37.585 [2024-07-22 18:41:49.372193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.585 [2024-07-22 18:41:49.372219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.585 [2024-07-22 18:41:49.372446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.585 [2024-07-22 18:41:49.372475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60472 len:8 PRP1 0x0 PRP2 0x0 00:34:37.585 [2024-07-22 18:41:49.372490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.585 [2024-07-22 18:41:49.372506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.585 [2024-07-22 18:41:49.372519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.585 [2024-07-22 18:41:49.372531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59536 len:8 PRP1 0x0 PRP2 0x0 00:34:37.585 [2024-07-22 18:41:49.372943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.585 [2024-07-22 18:41:49.373057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.585 [2024-07-22 18:41:49.373073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.585 [2024-07-22 18:41:49.373087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59544 len:8 PRP1 0x0 PRP2 0x0 00:34:37.585 [2024-07-22 18:41:49.373234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.585 [2024-07-22 18:41:49.373372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.585 [2024-07-22 18:41:49.373626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.585 [2024-07-22 18:41:49.373668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59552 len:8 PRP1 0x0 PRP2 0x0 00:34:37.585 [2024-07-22 18:41:49.373684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.585 [2024-07-22 18:41:49.373701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.585 [2024-07-22 
18:41:49.373901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.585 [2024-07-22 18:41:49.373926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59560 len:8 PRP1 0x0 PRP2 0x0 00:34:37.585 [2024-07-22 18:41:49.373942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.585 [2024-07-22 18:41:49.373957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.585 [2024-07-22 18:41:49.373970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.585 [2024-07-22 18:41:49.373983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59568 len:8 PRP1 0x0 PRP2 0x0 00:34:37.585 [2024-07-22 18:41:49.374281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.585 [2024-07-22 18:41:49.374303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.585 [2024-07-22 18:41:49.374315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.585 [2024-07-22 18:41:49.374329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59576 len:8 PRP1 0x0 PRP2 0x0 00:34:37.585 [2024-07-22 18:41:49.374587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.585 [2024-07-22 18:41:49.374616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.585 [2024-07-22 18:41:49.374755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.585 [2024-07-22 18:41:49.374884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59584 len:8 PRP1 0x0 PRP2 0x0 00:34:37.585 [2024-07-22 18:41:49.374902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.585 [2024-07-22 18:41:49.375165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.585 [2024-07-22 18:41:49.375185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.585 [2024-07-22 18:41:49.375319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59592 len:8 PRP1 0x0 PRP2 0x0 00:34:37.585 [2024-07-22 18:41:49.375448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.585 [2024-07-22 18:41:49.375474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.585 [2024-07-22 18:41:49.375721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.585 [2024-07-22 18:41:49.375753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59600 len:8 PRP1 0x0 PRP2 0x0 00:34:37.585 [2024-07-22 18:41:49.375768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.585 [2024-07-22 18:41:49.375784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.585 [2024-07-22 18:41:49.375796] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.585 [2024-07-22 18:41:49.375809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59608 len:8 PRP1 0x0 PRP2 0x0 00:34:37.585 [2024-07-22 18:41:49.376067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.585 [2024-07-22 18:41:49.376183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.585 [2024-07-22 18:41:49.376199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.585 [2024-07-22 18:41:49.376212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59616 len:8 PRP1 0x0 PRP2 0x0 00:34:37.585 [2024-07-22 18:41:49.376339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.585 [2024-07-22 18:41:49.376484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.586 [2024-07-22 18:41:49.376577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.586 [2024-07-22 18:41:49.376592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59624 len:8 PRP1 0x0 PRP2 0x0 00:34:37.586 [2024-07-22 18:41:49.376606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.586 [2024-07-22 18:41:49.376622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.586 [2024-07-22 18:41:49.376633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.586 [2024-07-22 18:41:49.376770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59632 len:8 PRP1 0x0 PRP2 0x0 00:34:37.586 [2024-07-22 18:41:49.376887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.586 [2024-07-22 18:41:49.376906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:37.586 [2024-07-22 18:41:49.376919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:37.586 [2024-07-22 18:41:49.377178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59640 len:8 PRP1 0x0 PRP2 0x0 00:34:37.586 [2024-07-22 18:41:49.377207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.586 [2024-07-22 18:41:49.377728] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b000 was disconnected and freed. reset controller. 
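Note on the burst above: each queued request is logged twice, first as the original command (opcode, sqid/cid, nsid, lba, len) and then as the manually generated completion, whose status prints as (00/08): Status Code Type 0h (Generic Command Status) with Status Code 08h (Command Aborted due to SQ Deletion); dnr:0 means the host may retry it. A minimal decoder for that status pair, shown here only for reference (plain NVMe spec values, not SPDK code):

# decode_status.py - illustrative decode of the "(SCT/SC)" pair printed by spdk_nvme_print_completion
# Only the two values seen in this log are listed; the full Generic Command Status table is longer.
GENERIC_STATUS = {
    0x00: "SUCCESS",
    0x08: "ABORTED - SQ DELETION",   # command aborted because its submission queue was deleted
}

def decode(sct: int, sc: int, dnr: int) -> str:
    name = GENERIC_STATUS.get(sc, f"SC 0x{sc:02x}") if sct == 0 else f"SCT 0x{sct:x}/SC 0x{sc:02x}"
    return f"{name}, {'do not retry' if dnr else 'retryable'}"

print(decode(0x00, 0x08, 0))   # -> ABORTED - SQ DELETION, retryable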
00:34:37.586 [2024-07-22 18:41:49.378078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:34:37.586 [2024-07-22 18:41:49.378592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:37.586 [2024-07-22 18:41:49.379162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.586 [2024-07-22 18:41:49.379254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:34:37.586 [2024-07-22 18:41:49.379289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:34:37.586 [2024-07-22 18:41:49.379333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:34:37.586 [2024-07-22 18:41:49.379362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:37.586 [2024-07-22 18:41:49.379628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:37.586 [2024-07-22 18:41:49.379779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:37.586 [2024-07-22 18:41:49.379936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:37.586 [2024-07-22 18:41:49.379959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:38.520 18:41:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:38.520 [2024-07-22 18:41:50.380502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.520 [2024-07-22 18:41:50.380648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:34:38.520 [2024-07-22 18:41:50.380692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:34:38.520 [2024-07-22 18:41:50.380761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:34:38.520 [2024-07-22 18:41:50.380813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:38.520 [2024-07-22 18:41:50.380860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:38.520 [2024-07-22 18:41:50.380891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:38.520 [2024-07-22 18:41:50.381329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
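The connect() failures above return errno 111 because the test had removed the target's TCP listener; host/timeout.sh@91 then re-adds it with nvmf_subsystem_add_listener, and the reset that follows succeeds (see the "Resetting controller successful." record below). A rough sketch of that remove/re-add cycle, using only the rpc.py invocations that appear in this log; the pause length and everything else inside host/timeout.sh are assumptions:

# listener_toggle.py - sketch of the listener remove/re-add cycle the log shows (not the actual host/timeout.sh)
import subprocess, time

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
ARGS = ["nqn.2016-06.io.spdk:cnode1", "-t", "tcp", "-a", "10.0.0.2", "-s", "4420"]

def rpc(cmd):
    subprocess.run([RPC, cmd, *ARGS], check=True)

rpc("nvmf_subsystem_remove_listener")   # initiator reconnects now fail with ECONNREFUSED (errno 111)
time.sleep(15)                          # assumed pause; the script's actual timing is not shown in this excerpt
rpc("nvmf_subsystem_add_listener")      # target listens again; the next controller reset completes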
00:34:38.520 [2024-07-22 18:41:50.381385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:38.778 [2024-07-22 18:41:50.577762] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:38.778 18:41:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 107903 00:34:39.710 [2024-07-22 18:41:51.399417] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:46.275 00:34:46.275 Latency(us) 00:34:46.275 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:46.275 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:46.275 Verification LBA range: start 0x0 length 0x4000 00:34:46.275 NVMe0n1 : 10.01 4398.63 17.18 0.00 0.00 29059.66 3247.01 3096158.95 00:34:46.275 =================================================================================================================== 00:34:46.275 Total : 4398.63 17.18 0.00 0.00 29059.66 3247.01 3096158.95 00:34:46.275 0 00:34:46.275 18:41:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=108007 00:34:46.275 18:41:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:46.275 18:41:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:34:46.558 Running I/O for 10 seconds... 00:34:47.494 18:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:47.494 [2024-07-22 18:41:59.508846] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:47.494 [2024-07-22 18:41:59.508935] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:47.494 [2024-07-22 18:41:59.508951] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:47.494 [2024-07-22 18:41:59.508964] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:47.494 [2024-07-22 18:41:59.508977] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:47.494 [2024-07-22 18:41:59.508989] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:47.495 [2024-07-22 18:41:59.509001] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:47.495 [2024-07-22 18:41:59.509014] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:47.495 [2024-07-22 18:41:59.509026] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:47.495 [2024-07-22 18:41:59.509038] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:47.495 [2024-07-22 18:41:59.509051] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be 
set 00:34:47.495 [2024-07-22 18:41:59.509062] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:47.495 [2024-07-22 18:41:59.509076] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:47.495 [2024-07-22 18:41:59.509089] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:47.495 [2024-07-22 18:41:59.509101] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:47.495 [2024-07-22 18:41:59.509112] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:34:47.495 [2024-07-22 18:41:59.510201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.495 [2024-07-22 18:41:59.510301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.495 [2024-07-22 18:41:59.510340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.495 [2024-07-22 18:41:59.510357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.495 [2024-07-22 18:41:59.510377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.495 [2024-07-22 18:41:59.510392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.495 [2024-07-22 18:41:59.510650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.495 [2024-07-22 18:41:59.510670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.495 [2024-07-22 18:41:59.510688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.495 [2024-07-22 18:41:59.510703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.495 [2024-07-22 18:41:59.510720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.495 [2024-07-22 18:41:59.511023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.495 [2024-07-22 18:41:59.511058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.495 [2024-07-22 18:41:59.511074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.495 [2024-07-22 18:41:59.511092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.495 [2024-07-22 18:41:59.511107] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.495 [2024-07-22 18:41:59.511125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.495 [2024-07-22 18:41:59.511139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.495 [2024-07-22 18:41:59.511459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.495 [2024-07-22 18:41:59.511479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.495 [2024-07-22 18:41:59.511497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.495 [2024-07-22 18:41:59.511513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.495 [2024-07-22 18:41:59.511530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.755 [2024-07-22 18:41:59.511851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.755 [2024-07-22 18:41:59.511888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.755 [2024-07-22 18:41:59.511905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.755 [2024-07-22 18:41:59.511923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.755 [2024-07-22 18:41:59.511938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.755 [2024-07-22 18:41:59.511956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.755 [2024-07-22 18:41:59.511971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.755 [2024-07-22 18:41:59.512205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:65376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.755 [2024-07-22 18:41:59.512223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.755 [2024-07-22 18:41:59.512242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.755 [2024-07-22 18:41:59.512257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.755 [2024-07-22 18:41:59.512278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:65392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.755 [2024-07-22 18:41:59.512529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.755 [2024-07-22 18:41:59.512551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.755 [2024-07-22 18:41:59.512568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.755 [2024-07-22 18:41:59.512718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.755 [2024-07-22 18:41:59.513015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.755 [2024-07-22 18:41:59.513041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.755 [2024-07-22 18:41:59.513057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.755 [2024-07-22 18:41:59.513076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:65424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.755 [2024-07-22 18:41:59.513096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.755 [2024-07-22 18:41:59.513224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:65432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.755 [2024-07-22 18:41:59.513387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.755 [2024-07-22 18:41:59.513507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:65440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.755 [2024-07-22 18:41:59.513525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.755 [2024-07-22 18:41:59.513545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.755 [2024-07-22 18:41:59.513560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.755 [2024-07-22 18:41:59.513577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.755 [2024-07-22 18:41:59.513845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.755 [2024-07-22 18:41:59.513878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:65464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.755 [2024-07-22 18:41:59.513895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.755 [2024-07-22 18:41:59.513914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.755 [2024-07-22 18:41:59.513928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.755 [2024-07-22 18:41:59.513945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.755 [2024-07-22 18:41:59.513959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.755 [2024-07-22 18:41:59.514223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:65488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.755 [2024-07-22 18:41:59.514242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.755 [2024-07-22 18:41:59.514260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.755 [2024-07-22 18:41:59.514274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.755 [2024-07-22 18:41:59.514291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:65504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.755 [2024-07-22 18:41:59.514555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.755 [2024-07-22 18:41:59.514645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.755 [2024-07-22 18:41:59.514665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.755 [2024-07-22 18:41:59.514682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.755 [2024-07-22 18:41:59.514697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.755 [2024-07-22 18:41:59.514716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:65528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.755 [2024-07-22 18:41:59.514730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.514982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:65536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.756 [2024-07-22 18:41:59.515003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.515021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.756 [2024-07-22 18:41:59.515037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.515056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.756 [2024-07-22 18:41:59.515323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:47.756 [2024-07-22 18:41:59.515344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:65560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.756 [2024-07-22 18:41:59.515359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.515378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.515393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.515522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.515668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.515796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.515820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.515935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.515951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.516217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.516250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.516271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.516287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.516304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.516319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.516541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.516567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.516587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.516602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.516620] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.516635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.517017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.517049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.517070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.517085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.517102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.517117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.517237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.517400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.517520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.517539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.517556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.517571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.517589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.517877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.517905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.517920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.517939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.517953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.518209] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.518239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.518259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.518274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.518291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.518306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.518323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.518446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.518616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.518721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.518746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.518762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.518779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.518794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.519001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.519029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.519050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.519066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.519083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.519097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.519222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66184 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.519376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.519527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.519634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.519658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.756 [2024-07-22 18:41:59.519673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.519691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.756 [2024-07-22 18:41:59.519706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.519846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.756 [2024-07-22 18:41:59.520131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.756 [2024-07-22 18:41:59.520245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.756 [2024-07-22 18:41:59.520265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.520283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.520298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.520315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:65600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.520702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.520740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.520758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.521089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:65616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.521179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.521199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:47.757 [2024-07-22 18:41:59.521214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.521233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:65632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.521247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.521473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.521500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.521520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.521535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.521553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.521567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.521813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.521851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.521873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.521892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.521920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.522065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.522165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.522182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.522201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.522215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.522233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.522246] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.522515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.522532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.522551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.522565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.522798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.522825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.522860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.522878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.522896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.523016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.523151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.523269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.523300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.523424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.523537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.523563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.523583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.523848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.523872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.523888] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.523906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.524027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.524175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.524305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.524569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.524699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.524844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.524970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.525104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.525131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.525235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.525252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.525502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.525538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.525561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.525576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.525594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.525710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.525734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.525750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.525876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.526140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.526176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.757 [2024-07-22 18:41:59.526194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.526212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.757 [2024-07-22 18:41:59.526226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.526244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.757 [2024-07-22 18:41:59.526356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.757 [2024-07-22 18:41:59.526627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.758 [2024-07-22 18:41:59.526657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.758 [2024-07-22 18:41:59.526677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.758 [2024-07-22 18:41:59.526692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.758 [2024-07-22 18:41:59.526709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.758 [2024-07-22 18:41:59.526723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.758 [2024-07-22 18:41:59.526974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.758 [2024-07-22 18:41:59.527236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.758 [2024-07-22 18:41:59.527385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.758 [2024-07-22 18:41:59.527506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.758 [2024-07-22 18:41:59.527535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.758 [2024-07-22 18:41:59.527662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:34:47.758 [2024-07-22 18:41:59.527932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.758 [2024-07-22 18:41:59.527960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.758 [2024-07-22 18:41:59.527980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.758 [2024-07-22 18:41:59.527995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.758 [2024-07-22 18:41:59.528014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.758 [2024-07-22 18:41:59.528028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.758 [2024-07-22 18:41:59.528276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.758 [2024-07-22 18:41:59.528386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.758 [2024-07-22 18:41:59.528416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.758 [2024-07-22 18:41:59.528536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.758 [2024-07-22 18:41:59.528567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.758 [2024-07-22 18:41:59.528707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.758 [2024-07-22 18:41:59.528966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.758 [2024-07-22 18:41:59.529096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.758 [2024-07-22 18:41:59.529232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:47.758 [2024-07-22 18:41:59.529352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.758 [2024-07-22 18:41:59.529630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:47.758 [2024-07-22 18:41:59.529666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:47.758 [2024-07-22 18:41:59.529684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66336 len:8 PRP1 0x0 PRP2 0x0 00:34:47.758 [2024-07-22 18:41:59.529700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.758 [2024-07-22 18:41:59.530300] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b780 was disconnected and 
freed. reset controller. 00:34:47.758 [2024-07-22 18:41:59.530660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:47.758 [2024-07-22 18:41:59.530699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.758 [2024-07-22 18:41:59.530720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:47.758 [2024-07-22 18:41:59.530734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.758 [2024-07-22 18:41:59.530749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:47.758 [2024-07-22 18:41:59.530763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.758 [2024-07-22 18:41:59.530881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:47.758 [2024-07-22 18:41:59.530909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.758 [2024-07-22 18:41:59.531144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:34:47.758 [2024-07-22 18:41:59.531613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:47.758 [2024-07-22 18:41:59.531688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:34:47.758 [2024-07-22 18:41:59.532043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.758 [2024-07-22 18:41:59.532104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:34:47.758 [2024-07-22 18:41:59.532123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:34:47.758 [2024-07-22 18:41:59.532156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:34:47.758 [2024-07-22 18:41:59.532269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:47.758 [2024-07-22 18:41:59.532289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:47.758 [2024-07-22 18:41:59.532512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:47.758 [2024-07-22 18:41:59.532566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:47.758 [2024-07-22 18:41:59.532586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:47.758 18:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:34:48.692 [2024-07-22 18:42:00.532951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.692 [2024-07-22 18:42:00.533067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:34:48.692 [2024-07-22 18:42:00.533095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:34:48.692 [2024-07-22 18:42:00.533143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:34:48.692 [2024-07-22 18:42:00.533177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:48.692 [2024-07-22 18:42:00.533194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:48.692 [2024-07-22 18:42:00.533212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:48.692 [2024-07-22 18:42:00.533261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:48.692 [2024-07-22 18:42:00.533281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:49.625 [2024-07-22 18:42:01.533561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.625 [2024-07-22 18:42:01.533681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:34:49.625 [2024-07-22 18:42:01.533711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:34:49.625 [2024-07-22 18:42:01.533758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:34:49.625 [2024-07-22 18:42:01.533790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:49.625 [2024-07-22 18:42:01.533808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:49.625 [2024-07-22 18:42:01.533826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:49.625 [2024-07-22 18:42:01.533912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:49.625 [2024-07-22 18:42:01.533934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:50.558 [2024-07-22 18:42:02.537173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:50.558 [2024-07-22 18:42:02.537285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:34:50.558 [2024-07-22 18:42:02.537312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:34:50.558 [2024-07-22 18:42:02.537605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:34:50.558 [2024-07-22 18:42:02.538285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:50.559 [2024-07-22 18:42:02.538327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:50.559 [2024-07-22 18:42:02.538348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:50.559 18:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:50.559 [2024-07-22 18:42:02.542879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:50.559 [2024-07-22 18:42:02.542928] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:50.816 [2024-07-22 18:42:02.809012] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:51.074 18:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 108007 00:34:51.640 [2024-07-22 18:42:03.604343] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
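The entries above show the recovery loop host/timeout.sh drives for the first bdevperf run: with the target listener removed, every reconnect attempt fails in posix_sock_create with errno 111 (ECONNREFUSED), the controller is marked failed, and bdev_nvme retries the reset roughly once per second until timeout.sh@102 re-adds the listener, after which the reset completes ("Resetting controller successful"). A minimal sketch of that listener toggle, assuming only the rpc.py path, subsystem NQN, and address/port already shown in this log:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  # Drop the TCP listener; host-side reconnects now fail with ECONNREFUSED (errno 111)
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420
  sleep 3    # corresponds to host/timeout.sh@101 -- sleep 3 above
  # Restore the listener; the next reconnect attempt succeeds and the pending reset completes
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420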
00:34:56.897
00:34:56.897 Latency(us)
00:34:56.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:56.897 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:34:56.897 Verification LBA range: start 0x0 length 0x4000
00:34:56.897 NVMe0n1 : 10.01 4026.14 15.73 3235.10 0.00 17586.82 860.16 3035150.89
00:34:56.897 ===================================================================================================================
00:34:56.897 Total : 4026.14 15.73 3235.10 0.00 17586.82 0.00 3035150.89
00:34:56.897 0
00:34:56.897 18:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 107853
00:34:56.897 18:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 107853 ']'
00:34:56.897 18:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 107853
00:34:56.897 18:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:34:56.897 18:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:34:56.897 18:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107853
00:34:56.897 killing process with pid 107853 Received shutdown signal, test time was about 10.000000 seconds
00:34:56.897
00:34:56.897 Latency(us)
00:34:56.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:56.897 ===================================================================================================================
00:34:56.897 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:56.897 18:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:34:56.897 18:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:34:56.897 18:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107853'
00:34:56.897 18:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 107853
00:34:56.897 18:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 107853
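As a quick consistency check on the summary table above: with 4096-byte reads, the MiB/s column follows directly from the IOPS column (bash, purely illustrative):

  # 4026.14 IOPS x 4096 bytes per I/O, converted to MiB/s
  echo 'scale=4; 4026.14 * 4096 / 1048576' | bc    # 15.7271 -> reported as 15.73 MiB/s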
00:34:57.828 18:42:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:57.828 18:42:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:34:57.828 [2024-07-22 18:42:09.806235] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:34:57.828 [2024-07-22 18:42:09.806791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108140 ] 00:34:58.086 [2024-07-22 18:42:09.983446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:58.343 [2024-07-22 18:42:10.256801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:58.908 18:42:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:58.908 18:42:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:34:58.908 18:42:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=108167 00:34:58.908 18:42:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 108140 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:34:58.908 18:42:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:34:59.181 18:42:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:34:59.461 NVMe0n1 00:34:59.461 18:42:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=108216 00:34:59.461 18:42:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:59.461 18:42:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:34:59.461 Running I/O for 10 seconds... 
00:35:00.394 18:42:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:00.655 [2024-07-22 18:42:12.538580] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.538657] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.538675] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.538688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.538700] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.538713] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.538727] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.538740] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.538752] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.538765] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.538786] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.538799] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.538810] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.538823] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.538851] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.538866] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.538877] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.538889] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.538902] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.538914] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 
is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.538926] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.538946] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.538958] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.538971] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.538984] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.538997] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.539010] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.539022] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:35:00.655 [2024-07-22 18:42:12.540239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:48504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.655 [2024-07-22 18:42:12.540324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.655 [2024-07-22 18:42:12.540383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.655 [2024-07-22 18:42:12.540405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.655 [2024-07-22 18:42:12.540647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:47664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.655 [2024-07-22 18:42:12.540679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.655 [2024-07-22 18:42:12.540702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:88472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.655 [2024-07-22 18:42:12.540718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.655 [2024-07-22 18:42:12.540736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.655 [2024-07-22 18:42:12.540752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.655 [2024-07-22 18:42:12.540770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.655 [2024-07-22 18:42:12.540924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.655 [2024-07-22 18:42:12.541038] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:57160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.655 [2024-07-22 18:42:12.541056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.655 [2024-07-22 18:42:12.541348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:41192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.655 [2024-07-22 18:42:12.541378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.655 [2024-07-22 18:42:12.541399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.655 [2024-07-22 18:42:12.541414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.655 [2024-07-22 18:42:12.541432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:40376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.655 [2024-07-22 18:42:12.541448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.655 [2024-07-22 18:42:12.541466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:121672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.655 [2024-07-22 18:42:12.541479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.655 [2024-07-22 18:42:12.541864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:108784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.655 [2024-07-22 18:42:12.541894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.655 [2024-07-22 18:42:12.541916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:53040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.655 [2024-07-22 18:42:12.541931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.655 [2024-07-22 18:42:12.541949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:68160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.655 [2024-07-22 18:42:12.541963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.655 [2024-07-22 18:42:12.541980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.655 [2024-07-22 18:42:12.542124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.655 [2024-07-22 18:42:12.542278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.655 [2024-07-22 18:42:12.542402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.655 [2024-07-22 18:42:12.542429] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.655 [2024-07-22 18:42:12.542445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.655 [2024-07-22 18:42:12.542463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:123256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.655 [2024-07-22 18:42:12.542478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.655 [2024-07-22 18:42:12.542496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:115080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.655 [2024-07-22 18:42:12.542710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.655 [2024-07-22 18:42:12.542734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.655 [2024-07-22 18:42:12.542750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.655 [2024-07-22 18:42:12.542768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:58520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.655 [2024-07-22 18:42:12.542782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.655 [2024-07-22 18:42:12.543124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:34936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.543154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.543174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.543190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.543208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:44576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.543222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.543239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.543367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.543469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.543488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.543507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 
nsid:1 lba:71440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.543521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.543538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.543886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.543923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.543940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.543961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.543978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.543995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:86720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.544251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.544284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:69712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.544300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.544555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.544575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.544595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:54832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.544610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.544627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.544959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.544991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.545009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.545027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37424 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.545041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.545059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:73160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.545073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.545337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:27288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.545451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.545480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.545496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.545736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:87152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.545761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.546047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.546084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.546107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:118864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.546123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.546141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.546156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.546175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:70296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.546318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.546602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:88856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.546708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.546732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 
[2024-07-22 18:42:12.546749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.547026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:56712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.547052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.547165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.547184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.547482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:40600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.547627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.547652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.547668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.547686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.547702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.547938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:124648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.547959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.547977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:127248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.547991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.548010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:116016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.548130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.548250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:68304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.548271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.548292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:127264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.548431] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.548532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.548551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.548569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:129048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.548583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.548727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:106088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.548976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.656 [2024-07-22 18:42:12.549001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:130888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.656 [2024-07-22 18:42:12.549017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.549034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.549048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.549273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.549303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.549322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.549338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.549356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:91408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.549371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.549390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:111544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.549517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.549658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:104904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.549758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.549781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:116224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.549796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.550092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.550130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.550153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.550170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.550188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:102152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.550202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.550221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:93256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.550489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.550636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:105016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.550765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.550923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.551041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.551073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.551217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.551323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:112016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.551342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.551360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.551624] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.551663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:30160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.551682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.551700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:115712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.551715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.551733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.551747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.551875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.551984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.552007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.552022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.552046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.552061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.552283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:85552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.552313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.552333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.552348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.552366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.552381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.552628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:129320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.552658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.552680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:39896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.552695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.552714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.552728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.552981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:129272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.553120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.553247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.553274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.553295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.553410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.553432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:119696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.553574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.553687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.553704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.553722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:58520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.553925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.553958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.553975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.553993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.554162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:35:00.657 [2024-07-22 18:42:12.554248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:124968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.554266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.554284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:46512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.657 [2024-07-22 18:42:12.554299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.657 [2024-07-22 18:42:12.554317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.554588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.554613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.554629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.554775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.554883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.554906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.554922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.555058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.555186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.555215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:103816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.555448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.555482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:36672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.555498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.555517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:86176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.555710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.555741] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:57816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.555756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.555774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:48432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.555788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.556062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:33360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.556096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.556117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.556131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.556273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:53824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.556405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.556548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:67248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.556667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.556979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.557018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.557043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:39472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.557058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.557328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.557595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.557633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:111952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.557774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.557925] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:105904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.558161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.558203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.558221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.558239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:35120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.558253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.558272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.558286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.558558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.558645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.558666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:89224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.558682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.558700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.558715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.558934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:111224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.558961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.558983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.558998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.559016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:29744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.658 [2024-07-22 18:42:12.559299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.559513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:35:00.658 [2024-07-22 18:42:12.559622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:00.658 [2024-07-22 18:42:12.559643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3176 len:8 PRP1 0x0 PRP2 0x0 00:35:00.658 [2024-07-22 18:42:12.559659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.560344] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b000 was disconnected and freed. reset controller. 00:35:00.658 [2024-07-22 18:42:12.560887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:00.658 [2024-07-22 18:42:12.560934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.560957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:00.658 [2024-07-22 18:42:12.560972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.560988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:00.658 [2024-07-22 18:42:12.561002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.561017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:00.658 [2024-07-22 18:42:12.561031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.658 [2024-07-22 18:42:12.561246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:35:00.658 [2024-07-22 18:42:12.561781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:00.658 [2024-07-22 18:42:12.561861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:35:00.658 [2024-07-22 18:42:12.562266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:00.658 [2024-07-22 18:42:12.562318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:35:00.658 [2024-07-22 18:42:12.562338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:35:00.658 [2024-07-22 18:42:12.562371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:35:00.658 [2024-07-22 18:42:12.562686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:00.658 [2024-07-22 18:42:12.562716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:00.658 [2024-07-22 18:42:12.562953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:00.658 [2024-07-22 18:42:12.562997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:00.658 [2024-07-22 18:42:12.563244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:00.659 18:42:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 108216 00:35:02.560 [2024-07-22 18:42:14.563538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.560 [2024-07-22 18:42:14.563641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:35:02.560 [2024-07-22 18:42:14.563670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:35:02.560 [2024-07-22 18:42:14.563719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:35:02.560 [2024-07-22 18:42:14.563753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.560 [2024-07-22 18:42:14.563770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.560 [2024-07-22 18:42:14.563788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.560 [2024-07-22 18:42:14.563859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:02.560 [2024-07-22 18:42:14.563884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:05.115 [2024-07-22 18:42:16.564209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:05.115 [2024-07-22 18:42:16.564325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:35:05.115 [2024-07-22 18:42:16.564352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:35:05.115 [2024-07-22 18:42:16.564402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:35:05.115 [2024-07-22 18:42:16.564436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:05.115 [2024-07-22 18:42:16.564453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:05.115 [2024-07-22 18:42:16.564472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:05.115 [2024-07-22 18:42:16.564524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:05.115 [2024-07-22 18:42:16.564544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:07.015 [2024-07-22 18:42:18.564662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:07.015 [2024-07-22 18:42:18.564776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:07.015 [2024-07-22 18:42:18.564797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:07.015 [2024-07-22 18:42:18.564821] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:35:07.015 [2024-07-22 18:42:18.564884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:07.623 00:35:07.623 Latency(us) 00:35:07.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.623 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:35:07.623 NVMe0n1 : 8.17 1860.53 7.27 15.67 0.00 68242.25 4617.31 7046430.72 00:35:07.623 =================================================================================================================== 00:35:07.623 Total : 1860.53 7.27 15.67 0.00 68242.25 4617.31 7046430.72 00:35:07.623 0 00:35:07.623 18:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:35:07.623 Attaching 5 probes... 00:35:07.623 1317.341163: reset bdev controller NVMe0 00:35:07.623 1317.724642: reconnect bdev controller NVMe0 00:35:07.623 3318.876589: reconnect delay bdev controller NVMe0 00:35:07.623 3318.916091: reconnect bdev controller NVMe0 00:35:07.623 5319.527136: reconnect delay bdev controller NVMe0 00:35:07.623 5319.565484: reconnect bdev controller NVMe0 00:35:07.623 7320.178911: reconnect delay bdev controller NVMe0 00:35:07.623 7320.217614: reconnect bdev controller NVMe0 00:35:07.623 18:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:35:07.623 18:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:35:07.623 18:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 108167 00:35:07.623 18:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:35:07.623 18:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 108140 00:35:07.623 18:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 108140 ']' 00:35:07.623 18:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 108140 00:35:07.623 18:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:35:07.623 18:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:07.623 18:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 108140 00:35:07.623 killing process with pid 108140 00:35:07.623 Received shutdown signal, test time was about 8.232388 seconds 00:35:07.623 00:35:07.623 Latency(us) 00:35:07.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.623 =================================================================================================================== 00:35:07.623 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:07.623 18:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:35:07.623 18:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:35:07.623 18:42:19 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 108140' 00:35:07.623 18:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 108140 00:35:07.623 18:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 108140 00:35:08.991 18:42:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:09.557 18:42:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:35:09.557 18:42:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:35:09.557 18:42:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:09.557 18:42:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:35:09.557 18:42:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:09.557 18:42:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:35:09.557 18:42:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:09.557 18:42:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:09.557 rmmod nvme_tcp 00:35:09.557 rmmod nvme_fabrics 00:35:09.557 rmmod nvme_keyring 00:35:09.557 18:42:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:09.557 18:42:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:35:09.557 18:42:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:35:09.557 18:42:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 107553 ']' 00:35:09.557 18:42:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 107553 00:35:09.557 18:42:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 107553 ']' 00:35:09.557 18:42:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 107553 00:35:09.557 18:42:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:35:09.557 18:42:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:09.557 18:42:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107553 00:35:09.557 killing process with pid 107553 00:35:09.557 18:42:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:09.557 18:42:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:09.557 18:42:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107553' 00:35:09.557 18:42:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 107553 00:35:09.557 18:42:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 107553 00:35:11.466 18:42:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:11.466 18:42:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:11.466 18:42:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:11.466 18:42:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:11.466 18:42:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:11.466 
18:42:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:11.466 18:42:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:11.466 18:42:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:11.466 18:42:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:35:11.466 00:35:11.466 real 0m52.518s 00:35:11.466 user 2m32.189s 00:35:11.466 sys 0m5.813s 00:35:11.466 18:42:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:11.466 18:42:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:35:11.466 ************************************ 00:35:11.466 END TEST nvmf_timeout 00:35:11.466 ************************************ 00:35:11.466 18:42:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:35:11.467 18:42:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:35:11.467 18:42:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:35:11.467 00:35:11.467 real 7m18.736s 00:35:11.467 user 19m53.665s 00:35:11.467 sys 1m18.691s 00:35:11.467 18:42:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:11.467 18:42:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.467 ************************************ 00:35:11.467 END TEST nvmf_host 00:35:11.467 ************************************ 00:35:11.467 18:42:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:35:11.467 00:35:11.467 real 25m50.117s 00:35:11.467 user 76m8.689s 00:35:11.467 sys 4m47.445s 00:35:11.467 18:42:23 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:11.467 ************************************ 00:35:11.467 END TEST nvmf_tcp 00:35:11.467 ************************************ 00:35:11.467 18:42:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:11.467 18:42:23 -- common/autotest_common.sh@1142 -- # return 0 00:35:11.467 18:42:23 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:35:11.467 18:42:23 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:11.467 18:42:23 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:11.467 18:42:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:11.467 18:42:23 -- common/autotest_common.sh@10 -- # set +x 00:35:11.467 ************************************ 00:35:11.467 START TEST spdkcli_nvmf_tcp 00:35:11.467 ************************************ 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:11.467 * Looking for test storage... 
00:35:11.467 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=108452 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 108452 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 108452 ']' 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:11.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:11.467 18:42:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:11.467 [2024-07-22 18:42:23.429552] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:35:11.467 [2024-07-22 18:42:23.429834] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108452 ] 00:35:11.724 [2024-07-22 18:42:23.621001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:11.982 [2024-07-22 18:42:23.900829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:11.982 [2024-07-22 18:42:23.900829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:12.915 18:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:12.915 18:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:35:12.915 18:42:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:12.915 18:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:12.915 18:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:12.915 18:42:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:12.915 18:42:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:12.915 18:42:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:12.915 18:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:12.915 18:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:12.915 18:42:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:12.915 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:12.915 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:12.915 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:12.915 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:12.915 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:12.915 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:12.915 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:12.915 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:12.915 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:12.915 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:12.915 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:12.915 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:12.915 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:12.915 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:12.915 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:12.915 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:12.915 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:12.915 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:12.915 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:12.915 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:12.915 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:12.915 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:12.915 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:12.915 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:12.915 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:12.915 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:12.915 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:12.915 ' 00:35:16.199 [2024-07-22 18:42:27.485763] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:16.766 [2024-07-22 18:42:28.748303] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:19.296 [2024-07-22 18:42:31.142282] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:21.297 [2024-07-22 18:42:33.199882] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:23.201 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:23.201 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:23.201 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:23.201 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:23.201 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:23.201 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:23.201 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:23.201 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:23.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:23.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:23.201 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:23.201 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:23.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:23.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:23.201 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:23.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:23.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:23.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:23.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:23.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:23.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:23.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:23.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:23.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:23.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:23.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:23.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:23.201 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:23.202 18:42:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:23.202 18:42:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:23.202 18:42:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:23.202 18:42:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:23.202 18:42:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:23.202 18:42:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:23.202 18:42:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:23.202 18:42:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:35:23.768 18:42:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:23.768 18:42:35 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:23.768 18:42:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:23.768 18:42:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:23.768 18:42:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:23.768 18:42:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:23.768 18:42:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:23.768 18:42:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:23.768 18:42:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:23.768 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:23.768 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:23.768 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:23.768 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:23.768 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:23.768 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:23.768 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:23.768 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:23.768 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:23.768 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:23.768 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:23.768 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:23.768 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:23.768 ' 00:35:30.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:30.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:30.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:30.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:30.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:30.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:30.324 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:30.324 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:30.324 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:30.324 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:30.324 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:30.324 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:30.324 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 
00:35:30.324 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:30.324 18:42:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:30.324 18:42:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:30.324 18:42:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:30.324 18:42:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 108452 00:35:30.324 18:42:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 108452 ']' 00:35:30.324 18:42:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 108452 00:35:30.324 18:42:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:35:30.324 18:42:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:30.324 18:42:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 108452 00:35:30.324 18:42:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:30.324 18:42:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:30.324 18:42:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 108452' 00:35:30.324 killing process with pid 108452 00:35:30.324 18:42:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 108452 00:35:30.324 18:42:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 108452 00:35:30.890 18:42:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:30.890 18:42:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:30.890 18:42:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 108452 ']' 00:35:30.890 18:42:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 108452 00:35:30.890 18:42:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 108452 ']' 00:35:30.890 18:42:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 108452 00:35:30.890 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (108452) - No such process 00:35:30.890 Process with pid 108452 is not found 00:35:30.890 18:42:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 108452 is not found' 00:35:30.890 18:42:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:30.890 18:42:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:30.890 18:42:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:30.890 00:35:30.890 real 0m19.713s 00:35:30.890 user 0m41.747s 00:35:30.890 sys 0m1.406s 00:35:30.890 18:42:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:30.890 18:42:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:30.890 ************************************ 00:35:30.890 END TEST spdkcli_nvmf_tcp 00:35:30.890 ************************************ 00:35:31.149 18:42:42 -- common/autotest_common.sh@1142 -- # return 0 00:35:31.149 18:42:42 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:31.149 18:42:42 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:31.149 18:42:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:31.149 18:42:42 -- common/autotest_common.sh@10 -- # set +x 00:35:31.149 
************************************ 00:35:31.149 START TEST nvmf_identify_passthru 00:35:31.149 ************************************ 00:35:31.149 18:42:42 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:31.149 * Looking for test storage... 00:35:31.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:35:31.149 18:42:43 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:31.149 18:42:43 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:31.149 18:42:43 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:31.149 18:42:43 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:31.149 18:42:43 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.149 18:42:43 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.149 18:42:43 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.149 18:42:43 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:31.149 18:42:43 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:31.149 18:42:43 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:31.149 18:42:43 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:31.149 18:42:43 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:31.149 18:42:43 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:31.149 18:42:43 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.149 18:42:43 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.149 18:42:43 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.149 18:42:43 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:31.149 18:42:43 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.149 18:42:43 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:31.149 18:42:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:31.149 18:42:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@432 -- # nvmf_veth_init 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:31.149 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:35:31.150 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:35:31.150 Cannot find device "nvmf_tgt_br" 00:35:31.150 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:35:31.150 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:35:31.150 Cannot find device "nvmf_tgt_br2" 00:35:31.150 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:35:31.150 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:35:31.150 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:35:31.150 Cannot find device "nvmf_tgt_br" 00:35:31.150 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:35:31.150 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:35:31.150 Cannot find device "nvmf_tgt_br2" 00:35:31.150 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:35:31.150 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:35:31.150 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:35:31.150 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:31.150 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:31.150 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:35:31.150 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:31.408 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:31.408 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:35:31.408 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:35:31.408 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:31.408 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:31.408 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:31.408 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:31.408 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:31.408 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:35:31.408 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:35:31.408 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:35:31.408 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:35:31.408 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:35:31.408 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:35:31.408 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:35:31.408 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:31.408 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:31.408 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:31.408 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:35:31.408 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:35:31.408 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:35:31.408 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:31.408 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:31.408 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:31.408 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:31.408 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:35:31.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:31.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:35:31.408 00:35:31.408 --- 10.0.0.2 ping statistics --- 00:35:31.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:31.408 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:35:31.408 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:35:31.409 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:31.409 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:35:31.409 00:35:31.409 --- 10.0.0.3 ping statistics --- 00:35:31.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:31.409 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:35:31.409 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:31.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:31.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:35:31.409 00:35:31.409 --- 10.0.0.1 ping statistics --- 00:35:31.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:31.409 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:35:31.409 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:31.409 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:35:31.409 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:31.409 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:31.409 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:31.409 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:31.409 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:31.409 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:31.409 18:42:43 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:31.409 18:42:43 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:31.409 18:42:43 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:31.409 18:42:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:31.409 18:42:43 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:31.409 18:42:43 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:35:31.409 18:42:43 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:35:31.409 18:42:43 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:35:31.409 18:42:43 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:35:31.409 18:42:43 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:35:31.409 18:42:43 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:35:31.409 18:42:43 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:31.409 18:42:43 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:35:31.409 18:42:43 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:35:31.666 18:42:43 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:35:31.666 18:42:43 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:35:31.666 18:42:43 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:35:31.666 18:42:43 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:35:31.666 18:42:43 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:35:31.666 18:42:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:35:31.666 18:42:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:31.666 18:42:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:31.938 18:42:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
00:35:31.938 18:42:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:35:31.938 18:42:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:31.938 18:42:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:32.206 18:42:44 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:35:32.206 18:42:44 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:32.206 18:42:44 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:32.206 18:42:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:32.206 18:42:44 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:32.206 18:42:44 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:32.206 18:42:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:32.206 18:42:44 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=108956 00:35:32.206 18:42:44 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:32.206 18:42:44 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:32.206 18:42:44 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 108956 00:35:32.206 18:42:44 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 108956 ']' 00:35:32.206 18:42:44 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:32.206 18:42:44 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:32.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:32.206 18:42:44 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:32.206 18:42:44 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:32.206 18:42:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:32.206 [2024-07-22 18:42:44.170603] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:35:32.206 [2024-07-22 18:42:44.170756] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:32.464 [2024-07-22 18:42:44.351378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:32.722 [2024-07-22 18:42:44.679042] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:32.722 [2024-07-22 18:42:44.679114] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:32.722 [2024-07-22 18:42:44.679132] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:32.722 [2024-07-22 18:42:44.679147] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:35:32.722 [2024-07-22 18:42:44.679159] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:32.722 [2024-07-22 18:42:44.679453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:32.722 [2024-07-22 18:42:44.680170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:32.722 [2024-07-22 18:42:44.680286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:32.723 [2024-07-22 18:42:44.680290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:33.288 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:33.288 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:35:33.288 18:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:33.288 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.288 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:33.288 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.288 18:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:33.288 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.288 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:33.546 [2024-07-22 18:42:45.461505] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:33.546 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.546 18:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:33.546 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.546 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:33.546 [2024-07-22 18:42:45.474317] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:33.546 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.546 18:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:33.546 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:33.546 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:33.546 18:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:35:33.546 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.546 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:33.804 Nvme0n1 00:35:33.804 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.804 18:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:33.804 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.804 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:33.804 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.804 18:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:33.804 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.804 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:33.804 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.804 18:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:33.804 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.804 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:33.804 [2024-07-22 18:42:45.641810] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:33.804 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.804 18:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:33.804 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.804 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:33.804 [ 00:35:33.804 { 00:35:33.804 "allow_any_host": true, 00:35:33.804 "hosts": [], 00:35:33.804 "listen_addresses": [], 00:35:33.804 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:33.804 "subtype": "Discovery" 00:35:33.804 }, 00:35:33.804 { 00:35:33.804 "allow_any_host": true, 00:35:33.804 "hosts": [], 00:35:33.804 "listen_addresses": [ 00:35:33.804 { 00:35:33.804 "adrfam": "IPv4", 00:35:33.804 "traddr": "10.0.0.2", 00:35:33.804 "trsvcid": "4420", 00:35:33.804 "trtype": "TCP" 00:35:33.804 } 00:35:33.804 ], 00:35:33.804 "max_cntlid": 65519, 00:35:33.804 "max_namespaces": 1, 00:35:33.804 "min_cntlid": 1, 00:35:33.804 "model_number": "SPDK bdev Controller", 00:35:33.804 "namespaces": [ 00:35:33.804 { 00:35:33.804 "bdev_name": "Nvme0n1", 00:35:33.804 "name": "Nvme0n1", 00:35:33.804 "nguid": "3A980383BC9A4CAAA3E9AEF0C3DD0995", 00:35:33.804 "nsid": 1, 00:35:33.804 "uuid": "3a980383-bc9a-4caa-a3e9-aef0c3dd0995" 00:35:33.804 } 00:35:33.804 ], 00:35:33.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:33.804 "serial_number": "SPDK00000000000001", 00:35:33.804 "subtype": "NVMe" 00:35:33.804 } 00:35:33.804 ] 00:35:33.804 18:42:45 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.804 18:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:33.804 18:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:33.804 18:42:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:34.062 18:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:35:34.062 18:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:34.062 18:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:34.062 18:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:34.643 18:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:35:34.643 18:42:46 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:35:34.643 18:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:35:34.643 18:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:34.643 18:42:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.643 18:42:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:34.643 18:42:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.643 18:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:34.643 18:42:46 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:34.643 18:42:46 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:34.643 18:42:46 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:35:34.643 18:42:46 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:34.643 18:42:46 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:35:34.643 18:42:46 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:34.643 18:42:46 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:34.643 rmmod nvme_tcp 00:35:34.643 rmmod nvme_fabrics 00:35:34.643 rmmod nvme_keyring 00:35:34.643 18:42:46 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:34.643 18:42:46 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:35:34.643 18:42:46 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:35:34.643 18:42:46 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 108956 ']' 00:35:34.643 18:42:46 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 108956 00:35:34.643 18:42:46 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 108956 ']' 00:35:34.643 18:42:46 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 108956 00:35:34.643 18:42:46 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:35:34.643 18:42:46 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:34.643 18:42:46 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 108956 00:35:34.643 killing process with pid 108956 00:35:34.643 18:42:46 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:34.643 18:42:46 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:34.643 18:42:46 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 108956' 00:35:34.643 18:42:46 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 108956 00:35:34.643 18:42:46 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 108956 00:35:36.027 18:42:47 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:36.027 18:42:47 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:36.027 18:42:47 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:36.027 18:42:47 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:36.027 18:42:47 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:36.027 18:42:47 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:36.027 
18:42:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:36.027 18:42:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:36.027 18:42:47 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:35:36.027 00:35:36.027 real 0m4.869s 00:35:36.027 user 0m11.614s 00:35:36.027 sys 0m1.234s 00:35:36.027 18:42:47 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:36.027 ************************************ 00:35:36.027 END TEST nvmf_identify_passthru 00:35:36.027 ************************************ 00:35:36.027 18:42:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:36.027 18:42:47 -- common/autotest_common.sh@1142 -- # return 0 00:35:36.027 18:42:47 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:35:36.027 18:42:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:36.027 18:42:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:36.027 18:42:47 -- common/autotest_common.sh@10 -- # set +x 00:35:36.027 ************************************ 00:35:36.027 START TEST nvmf_dif 00:35:36.027 ************************************ 00:35:36.027 18:42:47 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:35:36.027 * Looking for test storage... 00:35:36.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:35:36.027 18:42:47 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:36.027 18:42:47 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:36.027 18:42:47 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:36.027 18:42:47 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:36.027 18:42:47 
nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.027 18:42:47 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.027 18:42:47 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.027 18:42:47 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:36.027 18:42:47 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:36.027 18:42:47 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:36.027 18:42:47 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:36.027 18:42:47 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:36.027 18:42:47 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:36.027 18:42:47 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:36.027 18:42:47 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:36.028 18:42:47 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:36.028 18:42:47 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:36.028 18:42:47 nvmf_dif -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:36.028 18:42:47 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:35:36.028 18:42:47 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:35:36.028 18:42:47 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:35:36.028 18:42:47 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:35:36.028 18:42:47 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:35:36.028 18:42:47 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:35:36.028 18:42:47 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:36.028 18:42:47 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:36.028 18:42:47 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:35:36.028 18:42:47 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:35:36.028 18:42:47 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:36.028 18:42:47 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:36.028 18:42:47 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:36.028 18:42:47 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:36.028 18:42:47 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:36.028 18:42:47 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:36.028 18:42:47 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:36.028 18:42:47 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:36.028 18:42:47 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:35:36.028 18:42:47 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:35:36.028 Cannot find device "nvmf_tgt_br" 00:35:36.028 18:42:47 nvmf_dif -- nvmf/common.sh@155 -- # true 00:35:36.028 18:42:47 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:35:36.028 Cannot find device "nvmf_tgt_br2" 00:35:36.028 18:42:47 nvmf_dif -- nvmf/common.sh@156 -- # true 00:35:36.028 18:42:47 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:35:36.028 18:42:47 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:35:36.028 Cannot find device "nvmf_tgt_br" 00:35:36.028 18:42:48 nvmf_dif -- nvmf/common.sh@158 -- # true 00:35:36.028 18:42:48 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:35:36.028 Cannot find device "nvmf_tgt_br2" 00:35:36.028 18:42:48 nvmf_dif -- nvmf/common.sh@159 -- # true 00:35:36.028 18:42:48 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:35:36.286 18:42:48 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:35:36.286 18:42:48 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:36.286 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:36.286 18:42:48 nvmf_dif -- nvmf/common.sh@162 -- # true 00:35:36.286 18:42:48 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:36.286 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:36.286 18:42:48 nvmf_dif -- nvmf/common.sh@163 -- # true 00:35:36.286 18:42:48 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:35:36.286 18:42:48 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:36.286 18:42:48 nvmf_dif -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:36.286 18:42:48 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:36.286 18:42:48 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:36.286 18:42:48 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:36.286 18:42:48 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:36.286 18:42:48 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:35:36.286 18:42:48 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:35:36.286 18:42:48 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:35:36.287 18:42:48 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:35:36.287 18:42:48 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:35:36.287 18:42:48 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:35:36.287 18:42:48 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:36.287 18:42:48 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:36.287 18:42:48 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:36.287 18:42:48 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:35:36.287 18:42:48 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:35:36.287 18:42:48 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:35:36.287 18:42:48 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:36.287 18:42:48 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:36.287 18:42:48 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:36.287 18:42:48 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:36.287 18:42:48 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:35:36.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:36.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:35:36.287 00:35:36.287 --- 10.0.0.2 ping statistics --- 00:35:36.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:36.287 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:35:36.287 18:42:48 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:35:36.287 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:36.287 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:35:36.287 00:35:36.287 --- 10.0.0.3 ping statistics --- 00:35:36.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:36.287 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:35:36.287 18:42:48 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:36.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:36.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:35:36.287 00:35:36.287 --- 10.0.0.1 ping statistics --- 00:35:36.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:36.287 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:35:36.287 18:42:48 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:36.287 18:42:48 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:35:36.287 18:42:48 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:36.287 18:42:48 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:35:36.544 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:36.803 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:35:36.803 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:35:36.803 18:42:48 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:36.803 18:42:48 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:36.803 18:42:48 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:36.803 18:42:48 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:36.803 18:42:48 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:36.803 18:42:48 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:36.803 18:42:48 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:36.803 18:42:48 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:36.803 18:42:48 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:36.803 18:42:48 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:36.803 18:42:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:36.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:36.803 18:42:48 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=109339 00:35:36.803 18:42:48 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:36.803 18:42:48 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 109339 00:35:36.803 18:42:48 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 109339 ']' 00:35:36.803 18:42:48 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:36.803 18:42:48 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:36.803 18:42:48 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:36.803 18:42:48 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:36.803 18:42:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:36.803 [2024-07-22 18:42:48.772094] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:35:36.803 [2024-07-22 18:42:48.772809] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:37.062 [2024-07-22 18:42:48.951743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:37.321 [2024-07-22 18:42:49.323727] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:35:37.321 [2024-07-22 18:42:49.324160] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:37.321 [2024-07-22 18:42:49.324316] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:37.321 [2024-07-22 18:42:49.324457] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:37.321 [2024-07-22 18:42:49.324480] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:37.321 [2024-07-22 18:42:49.324545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:37.887 18:42:49 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:37.887 18:42:49 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:35:37.888 18:42:49 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:37.888 18:42:49 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:37.888 18:42:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:37.888 18:42:49 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:37.888 18:42:49 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:37.888 18:42:49 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:37.888 18:42:49 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.888 18:42:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:37.888 [2024-07-22 18:42:49.766620] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:37.888 18:42:49 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.888 18:42:49 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:37.888 18:42:49 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:37.888 18:42:49 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:37.888 18:42:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:37.888 ************************************ 00:35:37.888 START TEST fio_dif_1_default 00:35:37.888 ************************************ 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:37.888 bdev_null0 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.888 18:42:49 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:37.888 [2024-07-22 18:42:49.810811] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:37.888 { 00:35:37.888 "params": { 00:35:37.888 "name": "Nvme$subsystem", 00:35:37.888 "trtype": "$TEST_TRANSPORT", 00:35:37.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:37.888 "adrfam": "ipv4", 00:35:37.888 "trsvcid": "$NVMF_PORT", 00:35:37.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:37.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:37.888 "hdgst": ${hdgst:-false}, 00:35:37.888 "ddgst": ${ddgst:-false} 00:35:37.888 }, 00:35:37.888 "method": "bdev_nvme_attach_controller" 00:35:37.888 } 00:35:37.888 EOF 00:35:37.888 )") 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:37.888 18:42:49 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:37.888 "params": { 00:35:37.888 "name": "Nvme0", 00:35:37.888 "trtype": "tcp", 00:35:37.888 "traddr": "10.0.0.2", 00:35:37.888 "adrfam": "ipv4", 00:35:37.888 "trsvcid": "4420", 00:35:37.888 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:37.888 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:37.888 "hdgst": false, 00:35:37.888 "ddgst": false 00:35:37.888 }, 00:35:37.888 "method": "bdev_nvme_attach_controller" 00:35:37.888 }' 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # break 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:37.888 18:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:38.146 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:38.146 fio-3.35 00:35:38.146 Starting 1 thread 00:35:50.403 00:35:50.403 filename0: (groupid=0, jobs=1): err= 0: pid=109418: Mon Jul 22 18:43:01 2024 00:35:50.403 read: IOPS=171, BW=687KiB/s (703kB/s)(6880KiB/10019msec) 00:35:50.403 slat (usec): min=8, max=122, avg=20.64, stdev=21.86 00:35:50.403 clat (usec): min=628, max=42951, avg=23222.92, stdev=20201.99 00:35:50.403 lat (usec): min=638, max=43020, avg=23243.56, stdev=20201.82 00:35:50.403 clat percentiles (usec): 00:35:50.403 | 1.00th=[ 652], 5.00th=[ 758], 10.00th=[ 791], 20.00th=[ 816], 00:35:50.403 | 30.00th=[ 889], 40.00th=[ 955], 50.00th=[41157], 60.00th=[41157], 00:35:50.403 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:35:50.403 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:35:50.403 | 99.99th=[42730] 00:35:50.403 bw ( KiB/s): min= 448, max= 2560, per=99.90%, avg=686.45, stdev=445.39, samples=20 00:35:50.403 iops : min= 112, max= 640, avg=171.60, stdev=111.35, samples=20 00:35:50.403 lat (usec) : 750=4.71%, 1000=38.90% 00:35:50.403 lat (msec) : 2=1.28%, 50=55.12% 00:35:50.403 cpu : usr=93.67%, sys=5.66%, ctx=22, 
majf=0, minf=1637 00:35:50.403 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:50.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:50.403 issued rwts: total=1720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.403 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:50.403 00:35:50.403 Run status group 0 (all jobs): 00:35:50.403 READ: bw=687KiB/s (703kB/s), 687KiB/s-687KiB/s (703kB/s-703kB/s), io=6880KiB (7045kB), run=10019-10019msec 00:35:50.403 ----------------------------------------------------- 00:35:50.403 Suppressions used: 00:35:50.403 count bytes template 00:35:50.403 1 8 /usr/src/fio/parse.c 00:35:50.403 1 8 libtcmalloc_minimal.so 00:35:50.403 1 904 libcrypto.so 00:35:50.403 ----------------------------------------------------- 00:35:50.403 00:35:50.403 18:43:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:50.403 18:43:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:50.403 18:43:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:50.403 18:43:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:50.403 18:43:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:50.403 18:43:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:50.403 18:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.403 18:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:50.403 18:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.403 18:43:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:50.403 18:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.403 18:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:50.403 18:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.403 ************************************ 00:35:50.403 END TEST fio_dif_1_default 00:35:50.403 ************************************ 00:35:50.403 00:35:50.403 real 0m12.605s 00:35:50.403 user 0m11.449s 00:35:50.403 sys 0m1.064s 00:35:50.403 18:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:50.403 18:43:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:50.662 18:43:02 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:50.662 18:43:02 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:50.662 18:43:02 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:50.662 18:43:02 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:50.662 18:43:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:50.662 ************************************ 00:35:50.662 START TEST fio_dif_1_multi_subsystems 00:35:50.662 ************************************ 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 
0 1 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:50.662 bdev_null0 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:50.662 [2024-07-22 18:43:02.468742] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:50.662 bdev_null1 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:50.662 { 00:35:50.662 "params": { 00:35:50.662 "name": "Nvme$subsystem", 00:35:50.662 "trtype": "$TEST_TRANSPORT", 00:35:50.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:50.662 "adrfam": "ipv4", 00:35:50.662 "trsvcid": "$NVMF_PORT", 00:35:50.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:50.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:50.662 "hdgst": ${hdgst:-false}, 00:35:50.662 "ddgst": ${ddgst:-false} 00:35:50.662 }, 00:35:50.662 "method": "bdev_nvme_attach_controller" 00:35:50.662 } 00:35:50.662 EOF 00:35:50.662 )") 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- 
# local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:50.662 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:35:50.663 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:50.663 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:50.663 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:50.663 { 00:35:50.663 "params": { 00:35:50.663 "name": "Nvme$subsystem", 00:35:50.663 "trtype": "$TEST_TRANSPORT", 00:35:50.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:50.663 "adrfam": "ipv4", 00:35:50.663 "trsvcid": "$NVMF_PORT", 00:35:50.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:50.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:50.663 "hdgst": ${hdgst:-false}, 00:35:50.663 "ddgst": ${ddgst:-false} 00:35:50.663 }, 00:35:50.663 "method": "bdev_nvme_attach_controller" 00:35:50.663 } 00:35:50.663 EOF 00:35:50.663 )") 00:35:50.663 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:50.663 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:50.663 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:50.663 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:50.663 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:50.663 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:50.663 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
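What the trace above boils down to on the host side: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem, `jq .` checks the merged document, and fio_bdev then launches /usr/src/fio/fio with the spdk_bdev plugin (and libasan) preloaded, passing the JSON config and the generated job file over /dev/fd/62 and /dev/fd/61. A minimal standalone sketch of that wiring follows; the fio binary, plugin and libasan paths are the ones printed in the log, and the params block mirrors the Nvme0 entry in the config printed just below, while the subsystems/bdev wrapper, the temp-file names and the job-file contents (including the Nvme0n1 bdev name) are assumptions for illustration only.

# Sketch only, not the harness itself.
cat > /tmp/bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
cat > /tmp/dif.fio <<'JOB'
[filename0]
; the namespace bdev name Nvme0n1 is an assumption
filename=Nvme0n1
rw=randread
bs=4k
iodepth=4
thread=1
JOB
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio

Preloading the plugin together with libasan is exactly what the ldd | grep libasan probe and the break/LD_PRELOAD lines in the trace arrange, presumably so the sanitizer runtime is initialized before the ioengine is loaded.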
00:35:50.663 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:50.663 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:50.663 "params": { 00:35:50.663 "name": "Nvme0", 00:35:50.663 "trtype": "tcp", 00:35:50.663 "traddr": "10.0.0.2", 00:35:50.663 "adrfam": "ipv4", 00:35:50.663 "trsvcid": "4420", 00:35:50.663 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:50.663 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:50.663 "hdgst": false, 00:35:50.663 "ddgst": false 00:35:50.663 }, 00:35:50.663 "method": "bdev_nvme_attach_controller" 00:35:50.663 },{ 00:35:50.663 "params": { 00:35:50.663 "name": "Nvme1", 00:35:50.663 "trtype": "tcp", 00:35:50.663 "traddr": "10.0.0.2", 00:35:50.663 "adrfam": "ipv4", 00:35:50.663 "trsvcid": "4420", 00:35:50.663 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:50.663 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:50.663 "hdgst": false, 00:35:50.663 "ddgst": false 00:35:50.663 }, 00:35:50.663 "method": "bdev_nvme_attach_controller" 00:35:50.663 }' 00:35:50.663 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:50.663 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:50.663 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # break 00:35:50.663 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:50.663 18:43:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:50.921 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:50.921 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:50.921 fio-3.35 00:35:50.921 Starting 2 threads 00:36:03.113 00:36:03.113 filename0: (groupid=0, jobs=1): err= 0: pid=109581: Mon Jul 22 18:43:13 2024 00:36:03.113 read: IOPS=151, BW=607KiB/s (622kB/s)(6080KiB/10011msec) 00:36:03.113 slat (nsec): min=8625, max=96617, avg=19592.11, stdev=17165.90 00:36:03.113 clat (usec): min=599, max=42872, avg=26276.67, stdev=19601.47 00:36:03.113 lat (usec): min=609, max=42916, avg=26296.26, stdev=19601.41 00:36:03.113 clat percentiles (usec): 00:36:03.113 | 1.00th=[ 644], 5.00th=[ 668], 10.00th=[ 701], 20.00th=[ 840], 00:36:03.113 | 30.00th=[ 1156], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:36:03.113 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:36:03.113 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:36:03.113 | 99.99th=[42730] 00:36:03.113 bw ( KiB/s): min= 448, max= 928, per=55.76%, avg=606.30, stdev=115.83, samples=20 00:36:03.113 iops : min= 112, max= 232, avg=151.55, stdev=28.96, samples=20 00:36:03.113 lat (usec) : 750=13.82%, 1000=12.30% 00:36:03.113 lat (msec) : 2=10.99%, 10=0.26%, 50=62.63% 00:36:03.113 cpu : usr=95.59%, sys=3.80%, ctx=15, majf=0, minf=1637 00:36:03.113 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:03.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.113 issued rwts: total=1520,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:36:03.113 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:03.113 filename1: (groupid=0, jobs=1): err= 0: pid=109582: Mon Jul 22 18:43:13 2024 00:36:03.113 read: IOPS=119, BW=480KiB/s (491kB/s)(4800KiB/10010msec) 00:36:03.113 slat (usec): min=6, max=137, avg=25.17, stdev=24.02 00:36:03.113 clat (usec): min=608, max=42847, avg=33274.70, stdev=16482.24 00:36:03.113 lat (usec): min=619, max=42915, avg=33299.87, stdev=16483.32 00:36:03.113 clat percentiles (usec): 00:36:03.113 | 1.00th=[ 652], 5.00th=[ 717], 10.00th=[ 857], 20.00th=[ 1352], 00:36:03.113 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:36:03.113 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:36:03.113 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:36:03.113 | 99.99th=[42730] 00:36:03.113 bw ( KiB/s): min= 384, max= 672, per=43.98%, avg=478.40, stdev=79.39, samples=20 00:36:03.113 iops : min= 96, max= 168, avg=119.60, stdev=19.85, samples=20 00:36:03.113 lat (usec) : 750=6.00%, 1000=5.33% 00:36:03.113 lat (msec) : 2=9.00%, 4=0.33%, 50=79.33% 00:36:03.113 cpu : usr=95.70%, sys=3.59%, ctx=14, majf=0, minf=1637 00:36:03.113 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:03.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.113 issued rwts: total=1200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.113 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:03.113 00:36:03.113 Run status group 0 (all jobs): 00:36:03.113 READ: bw=1087KiB/s (1113kB/s), 480KiB/s-607KiB/s (491kB/s-622kB/s), io=10.6MiB (11.1MB), run=10010-10011msec 00:36:03.372 ----------------------------------------------------- 00:36:03.372 Suppressions used: 00:36:03.372 count bytes template 00:36:03.372 2 16 /usr/src/fio/parse.c 00:36:03.372 1 8 libtcmalloc_minimal.so 00:36:03.372 1 904 libcrypto.so 00:36:03.372 ----------------------------------------------------- 00:36:03.372 00:36:03.372 18:43:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:03.372 18:43:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:03.372 18:43:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:03.372 18:43:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:03.372 18:43:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:03.372 18:43:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:03.372 18:43:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.372 18:43:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:03.372 18:43:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.372 18:43:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:03.372 18:43:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.372 18:43:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:03.372 18:43:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.372 18:43:15 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:03.372 18:43:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:03.372 18:43:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:03.372 18:43:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:03.372 18:43:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.372 18:43:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:03.372 18:43:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.372 18:43:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:03.372 18:43:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.372 18:43:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:03.372 18:43:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.372 ************************************ 00:36:03.372 END TEST fio_dif_1_multi_subsystems 00:36:03.372 ************************************ 00:36:03.372 00:36:03.372 real 0m12.880s 00:36:03.372 user 0m21.507s 00:36:03.372 sys 0m1.218s 00:36:03.372 18:43:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:03.372 18:43:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:03.372 18:43:15 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:36:03.372 18:43:15 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:36:03.372 18:43:15 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:03.372 18:43:15 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:03.372 18:43:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:03.372 ************************************ 00:36:03.372 START TEST fio_dif_rand_params 00:36:03.372 ************************************ 00:36:03.372 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:36:03.372 18:43:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:03.372 18:43:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:03.373 18:43:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:03.373 18:43:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:03.373 18:43:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:36:03.373 18:43:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:03.373 18:43:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:03.373 18:43:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:03.373 18:43:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:03.373 18:43:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:03.373 18:43:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:03.373 18:43:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:03.373 18:43:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 
--md-size 16 --dif-type 3 00:36:03.373 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.373 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.373 bdev_null0 00:36:03.373 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.373 18:43:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:03.373 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.373 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.631 [2024-07-22 18:43:15.405009] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:03.631 { 00:36:03.631 "params": { 00:36:03.631 "name": "Nvme$subsystem", 00:36:03.631 "trtype": "$TEST_TRANSPORT", 00:36:03.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.631 "adrfam": "ipv4", 00:36:03.631 "trsvcid": "$NVMF_PORT", 00:36:03.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.631 "hdgst": ${hdgst:-false}, 00:36:03.631 "ddgst": ${ddgst:-false} 00:36:03.631 }, 00:36:03.631 "method": "bdev_nvme_attach_controller" 00:36:03.631 } 00:36:03.631 EOF 00:36:03.631 )") 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:03.631 18:43:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:03.632 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:03.632 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:03.632 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:03.632 18:43:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:03.632 18:43:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:03.632 18:43:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:36:03.632 18:43:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:36:03.632 18:43:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:03.632 "params": { 00:36:03.632 "name": "Nvme0", 00:36:03.632 "trtype": "tcp", 00:36:03.632 "traddr": "10.0.0.2", 00:36:03.632 "adrfam": "ipv4", 00:36:03.632 "trsvcid": "4420", 00:36:03.632 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:03.632 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:03.632 "hdgst": false, 00:36:03.632 "ddgst": false 00:36:03.632 }, 00:36:03.632 "method": "bdev_nvme_attach_controller" 00:36:03.632 }' 00:36:03.632 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:03.632 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:03.632 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:36:03.632 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:03.632 18:43:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:03.632 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:03.632 ... 
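The 3-thread run starting below drives the dif-type-3 null bdev set up in the trace above: bdev_null0 (64 MB, 512-byte blocks, 16-byte metadata) is created, added as a namespace of nqn.2016-06.io.spdk:cnode0, and a TCP listener is opened on 10.0.0.2:4420; fio then reads it with bs=128k, iodepth=3, numjobs=3 for 5 seconds. Outside the harness the same target-side setup corresponds roughly to the direct RPC calls sketched below; the arguments are copied from the rpc_cmd lines in the trace, and only the scripts/rpc.py entry point (which rpc_cmd wraps) is an assumption.

# Target-side setup equivalent to the rpc_cmd calls traced above (sketch).
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420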
00:36:03.632 fio-3.35 00:36:03.632 Starting 3 threads 00:36:10.190 00:36:10.190 filename0: (groupid=0, jobs=1): err= 0: pid=109734: Mon Jul 22 18:43:21 2024 00:36:10.190 read: IOPS=178, BW=22.3MiB/s (23.4MB/s)(112MiB/5001msec) 00:36:10.190 slat (usec): min=9, max=111, avg=29.55, stdev=19.67 00:36:10.190 clat (usec): min=5385, max=57165, avg=16724.29, stdev=12156.52 00:36:10.190 lat (usec): min=5396, max=57229, avg=16753.84, stdev=12156.63 00:36:10.190 clat percentiles (usec): 00:36:10.190 | 1.00th=[ 5604], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[11600], 00:36:10.190 | 30.00th=[12256], 40.00th=[13042], 50.00th=[13566], 60.00th=[13829], 00:36:10.190 | 70.00th=[14091], 80.00th=[14615], 90.00th=[20841], 95.00th=[54264], 00:36:10.190 | 99.00th=[55837], 99.50th=[56361], 99.90th=[57410], 99.95th=[57410], 00:36:10.190 | 99.99th=[57410] 00:36:10.190 bw ( KiB/s): min=19200, max=27648, per=31.46%, avg=22869.33, stdev=2654.26, samples=9 00:36:10.190 iops : min= 150, max= 216, avg=178.67, stdev=20.74, samples=9 00:36:10.190 lat (msec) : 10=8.17%, 20=81.77%, 50=1.12%, 100=8.95% 00:36:10.190 cpu : usr=94.22%, sys=4.40%, ctx=46, majf=0, minf=1637 00:36:10.190 IO depths : 1=8.6%, 2=91.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:10.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.190 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.190 issued rwts: total=894,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.190 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:10.190 filename0: (groupid=0, jobs=1): err= 0: pid=109735: Mon Jul 22 18:43:21 2024 00:36:10.190 read: IOPS=208, BW=26.1MiB/s (27.4MB/s)(131MiB/5001msec) 00:36:10.190 slat (nsec): min=6364, max=96010, avg=21004.78, stdev=14804.01 00:36:10.190 clat (usec): min=5347, max=54104, avg=14314.37, stdev=4823.48 00:36:10.190 lat (usec): min=5363, max=54114, avg=14335.38, stdev=4823.44 00:36:10.190 clat percentiles (usec): 00:36:10.190 | 1.00th=[ 5473], 5.00th=[ 5735], 10.00th=[10159], 20.00th=[10683], 00:36:10.190 | 30.00th=[11338], 40.00th=[12780], 50.00th=[15270], 60.00th=[16319], 00:36:10.190 | 70.00th=[16909], 80.00th=[17433], 90.00th=[17957], 95.00th=[18744], 00:36:10.190 | 99.00th=[27132], 99.50th=[50594], 99.90th=[52691], 99.95th=[54264], 00:36:10.190 | 99.99th=[54264] 00:36:10.190 bw ( KiB/s): min=24576, max=27648, per=36.39%, avg=26453.33, stdev=1093.63, samples=9 00:36:10.190 iops : min= 192, max= 216, avg=206.67, stdev= 8.54, samples=9 00:36:10.190 lat (msec) : 10=9.00%, 20=88.41%, 50=2.01%, 100=0.57% 00:36:10.190 cpu : usr=92.28%, sys=5.98%, ctx=5, majf=0, minf=1635 00:36:10.190 IO depths : 1=30.3%, 2=69.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:10.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.190 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.190 issued rwts: total=1044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.190 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:10.190 filename0: (groupid=0, jobs=1): err= 0: pid=109736: Mon Jul 22 18:43:21 2024 00:36:10.190 read: IOPS=182, BW=22.8MiB/s (23.9MB/s)(115MiB/5027msec) 00:36:10.190 slat (usec): min=9, max=108, avg=27.41, stdev=12.48 00:36:10.190 clat (usec): min=5599, max=59115, avg=16407.03, stdev=9989.16 00:36:10.190 lat (usec): min=5618, max=59162, avg=16434.44, stdev=9989.26 00:36:10.190 clat percentiles (usec): 00:36:10.190 | 1.00th=[ 5669], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[10290], 00:36:10.190 | 
30.00th=[12125], 40.00th=[14746], 50.00th=[15401], 60.00th=[15926], 00:36:10.190 | 70.00th=[16319], 80.00th=[16909], 90.00th=[17957], 95.00th=[49546], 00:36:10.190 | 99.00th=[56886], 99.50th=[58983], 99.90th=[58983], 99.95th=[58983], 00:36:10.190 | 99.99th=[58983] 00:36:10.190 bw ( KiB/s): min=17664, max=27904, per=32.19%, avg=23402.80, stdev=3187.08, samples=10 00:36:10.190 iops : min= 138, max= 218, avg=182.80, stdev=24.91, samples=10 00:36:10.190 lat (msec) : 10=13.63%, 20=79.61%, 50=1.96%, 100=4.80% 00:36:10.190 cpu : usr=93.43%, sys=4.81%, ctx=7, majf=0, minf=1637 00:36:10.190 IO depths : 1=2.7%, 2=97.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:10.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.190 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.190 issued rwts: total=917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.190 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:10.190 00:36:10.190 Run status group 0 (all jobs): 00:36:10.190 READ: bw=71.0MiB/s (74.4MB/s), 22.3MiB/s-26.1MiB/s (23.4MB/s-27.4MB/s), io=357MiB (374MB), run=5001-5027msec 00:36:11.131 ----------------------------------------------------- 00:36:11.131 Suppressions used: 00:36:11.131 count bytes template 00:36:11.131 5 44 /usr/src/fio/parse.c 00:36:11.131 1 8 libtcmalloc_minimal.so 00:36:11.131 1 904 libcrypto.so 00:36:11.131 ----------------------------------------------------- 00:36:11.131 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@30 -- # for sub in "$@" 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.131 bdev_null0 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.131 [2024-07-22 18:43:22.979063] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.131 bdev_null1 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.131 18:43:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.131 bdev_null2 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:11.131 { 00:36:11.131 "params": { 00:36:11.131 "name": "Nvme$subsystem", 00:36:11.131 "trtype": "$TEST_TRANSPORT", 00:36:11.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:11.131 "adrfam": "ipv4", 00:36:11.131 "trsvcid": "$NVMF_PORT", 00:36:11.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:11.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:11.131 "hdgst": ${hdgst:-false}, 00:36:11.131 "ddgst": ${ddgst:-false} 00:36:11.131 }, 00:36:11.131 "method": "bdev_nvme_attach_controller" 00:36:11.131 } 00:36:11.131 EOF 00:36:11.131 )") 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:11.131 { 00:36:11.131 "params": { 00:36:11.131 "name": "Nvme$subsystem", 00:36:11.131 "trtype": "$TEST_TRANSPORT", 00:36:11.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:11.131 "adrfam": "ipv4", 00:36:11.131 "trsvcid": "$NVMF_PORT", 00:36:11.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:11.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:11.131 "hdgst": ${hdgst:-false}, 00:36:11.131 "ddgst": ${ddgst:-false} 00:36:11.131 }, 00:36:11.131 "method": "bdev_nvme_attach_controller" 00:36:11.131 } 00:36:11.131 EOF 00:36:11.131 
)") 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:11.131 18:43:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:11.131 { 00:36:11.131 "params": { 00:36:11.131 "name": "Nvme$subsystem", 00:36:11.131 "trtype": "$TEST_TRANSPORT", 00:36:11.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:11.131 "adrfam": "ipv4", 00:36:11.131 "trsvcid": "$NVMF_PORT", 00:36:11.132 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:11.132 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:11.132 "hdgst": ${hdgst:-false}, 00:36:11.132 "ddgst": ${ddgst:-false} 00:36:11.132 }, 00:36:11.132 "method": "bdev_nvme_attach_controller" 00:36:11.132 } 00:36:11.132 EOF 00:36:11.132 )") 00:36:11.132 18:43:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:11.132 18:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:11.132 18:43:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:11.132 18:43:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:36:11.132 18:43:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:36:11.132 18:43:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:11.132 "params": { 00:36:11.132 "name": "Nvme0", 00:36:11.132 "trtype": "tcp", 00:36:11.132 "traddr": "10.0.0.2", 00:36:11.132 "adrfam": "ipv4", 00:36:11.132 "trsvcid": "4420", 00:36:11.132 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:11.132 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:11.132 "hdgst": false, 00:36:11.132 "ddgst": false 00:36:11.132 }, 00:36:11.132 "method": "bdev_nvme_attach_controller" 00:36:11.132 },{ 00:36:11.132 "params": { 00:36:11.132 "name": "Nvme1", 00:36:11.132 "trtype": "tcp", 00:36:11.132 "traddr": "10.0.0.2", 00:36:11.132 "adrfam": "ipv4", 00:36:11.132 "trsvcid": "4420", 00:36:11.132 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:11.132 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:11.132 "hdgst": false, 00:36:11.132 "ddgst": false 00:36:11.132 }, 00:36:11.132 "method": "bdev_nvme_attach_controller" 00:36:11.132 },{ 00:36:11.132 "params": { 00:36:11.132 "name": "Nvme2", 00:36:11.132 "trtype": "tcp", 00:36:11.132 "traddr": "10.0.0.2", 00:36:11.132 "adrfam": "ipv4", 00:36:11.132 "trsvcid": "4420", 00:36:11.132 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:11.132 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:11.132 "hdgst": false, 00:36:11.132 "ddgst": false 00:36:11.132 }, 00:36:11.132 "method": "bdev_nvme_attach_controller" 00:36:11.132 }' 00:36:11.132 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:11.132 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:11.132 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:36:11.132 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:11.132 18:43:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:11.440 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:11.440 ... 00:36:11.440 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:11.440 ... 00:36:11.440 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:11.440 ... 00:36:11.440 fio-3.35 00:36:11.440 Starting 24 threads 00:36:23.699 00:36:23.699 filename0: (groupid=0, jobs=1): err= 0: pid=109840: Mon Jul 22 18:43:34 2024 00:36:23.699 read: IOPS=168, BW=675KiB/s (692kB/s)(6780KiB/10040msec) 00:36:23.699 slat (usec): min=4, max=8036, avg=29.87, stdev=327.83 00:36:23.699 clat (msec): min=14, max=191, avg=94.48, stdev=32.10 00:36:23.699 lat (msec): min=14, max=191, avg=94.51, stdev=32.09 00:36:23.699 clat percentiles (msec): 00:36:23.699 | 1.00th=[ 18], 5.00th=[ 55], 10.00th=[ 61], 20.00th=[ 69], 00:36:23.699 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 94], 60.00th=[ 97], 00:36:23.699 | 70.00th=[ 108], 80.00th=[ 123], 90.00th=[ 136], 95.00th=[ 157], 00:36:23.699 | 99.00th=[ 169], 99.50th=[ 192], 99.90th=[ 192], 99.95th=[ 192], 00:36:23.699 | 99.99th=[ 192] 00:36:23.699 bw ( KiB/s): min= 384, max= 1002, per=4.43%, avg=671.60, stdev=158.65, samples=20 00:36:23.699 iops : min= 96, max= 250, avg=167.85, stdev=39.59, samples=20 00:36:23.699 lat (msec) : 20=2.77%, 50=0.47%, 100=59.47%, 250=37.29% 00:36:23.699 cpu : usr=34.32%, sys=0.66%, ctx=915, majf=0, minf=1635 00:36:23.699 IO depths : 1=2.2%, 2=4.8%, 4=13.7%, 8=68.4%, 16=10.9%, 32=0.0%, >=64=0.0% 00:36:23.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.699 complete : 0=0.0%, 4=91.0%, 8=3.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.699 issued rwts: total=1695,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.700 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.700 filename0: (groupid=0, jobs=1): err= 0: pid=109841: Mon Jul 22 18:43:34 2024 00:36:23.700 read: IOPS=153, BW=616KiB/s (630kB/s)(6188KiB/10050msec) 00:36:23.700 slat (usec): min=8, max=8044, avg=25.63, stdev=288.40 00:36:23.700 clat (msec): min=47, max=177, avg=103.59, stdev=27.38 00:36:23.700 lat (msec): min=47, max=177, avg=103.61, stdev=27.40 00:36:23.700 clat percentiles (msec): 00:36:23.700 | 1.00th=[ 48], 5.00th=[ 63], 10.00th=[ 72], 20.00th=[ 79], 00:36:23.700 | 30.00th=[ 86], 40.00th=[ 96], 50.00th=[ 99], 60.00th=[ 108], 00:36:23.700 | 70.00th=[ 121], 80.00th=[ 132], 90.00th=[ 142], 95.00th=[ 155], 00:36:23.700 | 99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 178], 99.95th=[ 178], 00:36:23.700 | 99.99th=[ 178] 00:36:23.700 bw ( KiB/s): min= 472, max= 768, per=4.03%, avg=612.00, stdev=96.44, samples=20 00:36:23.700 iops : min= 118, max= 192, avg=152.90, stdev=24.10, samples=20 00:36:23.700 lat (msec) : 50=1.03%, 100=51.13%, 250=47.83% 00:36:23.700 cpu : usr=32.60%, sys=0.82%, ctx=857, majf=0, minf=1634 00:36:23.700 IO depths : 1=1.8%, 2=3.9%, 4=12.7%, 8=70.2%, 16=11.3%, 32=0.0%, >=64=0.0% 00:36:23.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.700 complete : 0=0.0%, 4=90.5%, 8=4.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.700 issued rwts: total=1547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.700 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.700 filename0: (groupid=0, jobs=1): err= 0: pid=109842: Mon Jul 22 18:43:34 2024 
00:36:23.700 read: IOPS=148, BW=595KiB/s (609kB/s)(5964KiB/10031msec) 00:36:23.700 slat (usec): min=4, max=100, avg=15.68, stdev= 7.33 00:36:23.700 clat (msec): min=36, max=217, avg=107.47, stdev=25.00 00:36:23.700 lat (msec): min=36, max=217, avg=107.49, stdev=25.00 00:36:23.700 clat percentiles (msec): 00:36:23.700 | 1.00th=[ 59], 5.00th=[ 69], 10.00th=[ 73], 20.00th=[ 92], 00:36:23.700 | 30.00th=[ 97], 40.00th=[ 101], 50.00th=[ 105], 60.00th=[ 110], 00:36:23.700 | 70.00th=[ 116], 80.00th=[ 126], 90.00th=[ 144], 95.00th=[ 155], 00:36:23.700 | 99.00th=[ 165], 99.50th=[ 184], 99.90th=[ 218], 99.95th=[ 218], 00:36:23.700 | 99.99th=[ 218] 00:36:23.700 bw ( KiB/s): min= 512, max= 768, per=3.92%, avg=594.16, stdev=72.76, samples=19 00:36:23.700 iops : min= 128, max= 192, avg=148.53, stdev=18.18, samples=19 00:36:23.700 lat (msec) : 50=0.34%, 100=40.64%, 250=59.02% 00:36:23.700 cpu : usr=45.30%, sys=1.10%, ctx=1474, majf=0, minf=1636 00:36:23.700 IO depths : 1=3.6%, 2=8.0%, 4=19.3%, 8=59.8%, 16=9.2%, 32=0.0%, >=64=0.0% 00:36:23.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.700 complete : 0=0.0%, 4=92.6%, 8=1.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.700 issued rwts: total=1491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.700 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.700 filename0: (groupid=0, jobs=1): err= 0: pid=109843: Mon Jul 22 18:43:34 2024 00:36:23.700 read: IOPS=157, BW=631KiB/s (646kB/s)(6352KiB/10062msec) 00:36:23.700 slat (usec): min=6, max=8046, avg=20.57, stdev=201.68 00:36:23.700 clat (msec): min=44, max=193, avg=101.22, stdev=29.75 00:36:23.700 lat (msec): min=44, max=193, avg=101.24, stdev=29.75 00:36:23.700 clat percentiles (msec): 00:36:23.700 | 1.00th=[ 45], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 73], 00:36:23.700 | 30.00th=[ 85], 40.00th=[ 95], 50.00th=[ 96], 60.00th=[ 108], 00:36:23.700 | 70.00th=[ 114], 80.00th=[ 121], 90.00th=[ 144], 95.00th=[ 157], 00:36:23.700 | 99.00th=[ 194], 99.50th=[ 194], 99.90th=[ 194], 99.95th=[ 194], 00:36:23.700 | 99.99th=[ 194] 00:36:23.700 bw ( KiB/s): min= 456, max= 816, per=4.14%, avg=627.45, stdev=105.01, samples=20 00:36:23.700 iops : min= 114, max= 204, avg=156.80, stdev=26.25, samples=20 00:36:23.700 lat (msec) : 50=1.01%, 100=53.97%, 250=45.03% 00:36:23.700 cpu : usr=32.47%, sys=0.79%, ctx=895, majf=0, minf=1634 00:36:23.700 IO depths : 1=1.4%, 2=3.5%, 4=12.0%, 8=71.3%, 16=11.8%, 32=0.0%, >=64=0.0% 00:36:23.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.700 complete : 0=0.0%, 4=90.7%, 8=4.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.700 issued rwts: total=1588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.700 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.700 filename0: (groupid=0, jobs=1): err= 0: pid=109844: Mon Jul 22 18:43:34 2024 00:36:23.700 read: IOPS=165, BW=661KiB/s (677kB/s)(6668KiB/10084msec) 00:36:23.700 slat (usec): min=4, max=8036, avg=27.06, stdev=294.64 00:36:23.700 clat (msec): min=22, max=213, avg=96.37, stdev=28.31 00:36:23.700 lat (msec): min=22, max=213, avg=96.40, stdev=28.31 00:36:23.700 clat percentiles (msec): 00:36:23.700 | 1.00th=[ 41], 5.00th=[ 59], 10.00th=[ 62], 20.00th=[ 72], 00:36:23.700 | 30.00th=[ 79], 40.00th=[ 89], 50.00th=[ 96], 60.00th=[ 100], 00:36:23.700 | 70.00th=[ 108], 80.00th=[ 120], 90.00th=[ 133], 95.00th=[ 150], 00:36:23.700 | 99.00th=[ 167], 99.50th=[ 169], 99.90th=[ 180], 99.95th=[ 213], 00:36:23.700 | 99.99th=[ 213] 00:36:23.700 bw ( KiB/s): min= 472, 
max= 896, per=4.35%, avg=659.95, stdev=123.24, samples=20 00:36:23.700 iops : min= 118, max= 224, avg=164.90, stdev=30.80, samples=20 00:36:23.700 lat (msec) : 50=3.36%, 100=56.99%, 250=39.65% 00:36:23.700 cpu : usr=36.73%, sys=0.92%, ctx=1031, majf=0, minf=1637 00:36:23.700 IO depths : 1=1.7%, 2=4.1%, 4=12.8%, 8=69.5%, 16=11.8%, 32=0.0%, >=64=0.0% 00:36:23.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.700 complete : 0=0.0%, 4=90.8%, 8=4.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.700 issued rwts: total=1667,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.700 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.700 filename0: (groupid=0, jobs=1): err= 0: pid=109845: Mon Jul 22 18:43:34 2024 00:36:23.700 read: IOPS=174, BW=697KiB/s (714kB/s)(7012KiB/10058msec) 00:36:23.700 slat (usec): min=5, max=8035, avg=21.53, stdev=191.83 00:36:23.700 clat (msec): min=41, max=170, avg=91.67, stdev=24.94 00:36:23.700 lat (msec): min=41, max=170, avg=91.69, stdev=24.95 00:36:23.700 clat percentiles (msec): 00:36:23.700 | 1.00th=[ 46], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 70], 00:36:23.700 | 30.00th=[ 74], 40.00th=[ 83], 50.00th=[ 91], 60.00th=[ 96], 00:36:23.700 | 70.00th=[ 105], 80.00th=[ 111], 90.00th=[ 126], 95.00th=[ 136], 00:36:23.700 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 171], 99.95th=[ 171], 00:36:23.700 | 99.99th=[ 171] 00:36:23.700 bw ( KiB/s): min= 498, max= 864, per=4.57%, avg=693.10, stdev=100.54, samples=20 00:36:23.700 iops : min= 124, max= 216, avg=173.20, stdev=25.17, samples=20 00:36:23.700 lat (msec) : 50=1.94%, 100=63.83%, 250=34.23% 00:36:23.700 cpu : usr=38.50%, sys=0.86%, ctx=1165, majf=0, minf=1636 00:36:23.700 IO depths : 1=1.0%, 2=2.3%, 4=10.6%, 8=73.8%, 16=12.3%, 32=0.0%, >=64=0.0% 00:36:23.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.700 complete : 0=0.0%, 4=90.2%, 8=5.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.700 issued rwts: total=1753,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.700 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.700 filename0: (groupid=0, jobs=1): err= 0: pid=109846: Mon Jul 22 18:43:34 2024 00:36:23.700 read: IOPS=181, BW=726KiB/s (744kB/s)(7348KiB/10120msec) 00:36:23.700 slat (usec): min=5, max=8050, avg=23.65, stdev=264.97 00:36:23.700 clat (msec): min=10, max=199, avg=87.83, stdev=29.88 00:36:23.700 lat (msec): min=10, max=199, avg=87.85, stdev=29.87 00:36:23.700 clat percentiles (msec): 00:36:23.700 | 1.00th=[ 14], 5.00th=[ 32], 10.00th=[ 58], 20.00th=[ 63], 00:36:23.700 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 85], 60.00th=[ 96], 00:36:23.700 | 70.00th=[ 106], 80.00th=[ 111], 90.00th=[ 121], 95.00th=[ 133], 00:36:23.700 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 199], 99.95th=[ 199], 00:36:23.700 | 99.99th=[ 199] 00:36:23.700 bw ( KiB/s): min= 508, max= 1208, per=4.80%, avg=728.00, stdev=131.88, samples=20 00:36:23.700 iops : min= 127, max= 302, avg=181.95, stdev=32.97, samples=20 00:36:23.700 lat (msec) : 20=2.99%, 50=4.30%, 100=60.91%, 250=31.79% 00:36:23.700 cpu : usr=33.53%, sys=0.64%, ctx=901, majf=0, minf=1637 00:36:23.700 IO depths : 1=1.1%, 2=2.4%, 4=8.3%, 8=75.4%, 16=12.8%, 32=0.0%, >=64=0.0% 00:36:23.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.700 complete : 0=0.0%, 4=89.8%, 8=6.0%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.700 issued rwts: total=1837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.700 latency : target=0, window=0, percentile=100.00%, depth=16 
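Each per-filename block in this report has the same shape, and for these 4 KiB random-read jobs the bandwidth line is simply IOPS times the block size, which gives a quick consistency check on the numbers (the figures below are copied from the pid=109840 block; the awk one-liner is only an illustration):

# BW ~= IOPS * block size for the 4k jobs; pid=109840 reports IOPS=168, BW=675KiB/s.
awk 'BEGIN { printf "%.0f KiB/s\n", 168 * 4 }'   # prints 672 KiB/s
# The small gap versus the reported 675KiB/s is rounding: fio prints IOPS as an
# integer but derives bandwidth from the exact I/O count (675 / 4 = 168.75).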
00:36:23.700 filename0: (groupid=0, jobs=1): err= 0: pid=109847: Mon Jul 22 18:43:34 2024 00:36:23.700 read: IOPS=165, BW=662KiB/s (677kB/s)(6680KiB/10097msec) 00:36:23.700 slat (usec): min=4, max=1059, avg=16.95, stdev=36.65 00:36:23.700 clat (msec): min=10, max=188, avg=96.60, stdev=29.22 00:36:23.700 lat (msec): min=10, max=188, avg=96.61, stdev=29.22 00:36:23.700 clat percentiles (msec): 00:36:23.700 | 1.00th=[ 21], 5.00th=[ 54], 10.00th=[ 65], 20.00th=[ 73], 00:36:23.700 | 30.00th=[ 86], 40.00th=[ 93], 50.00th=[ 97], 60.00th=[ 104], 00:36:23.700 | 70.00th=[ 109], 80.00th=[ 117], 90.00th=[ 131], 95.00th=[ 148], 00:36:23.700 | 99.00th=[ 169], 99.50th=[ 171], 99.90th=[ 188], 99.95th=[ 188], 00:36:23.700 | 99.99th=[ 188] 00:36:23.700 bw ( KiB/s): min= 440, max= 1024, per=4.34%, avg=658.15, stdev=133.87, samples=20 00:36:23.700 iops : min= 110, max= 256, avg=164.45, stdev=33.49, samples=20 00:36:23.700 lat (msec) : 20=0.96%, 50=3.29%, 100=52.46%, 250=43.29% 00:36:23.700 cpu : usr=43.57%, sys=0.99%, ctx=1897, majf=0, minf=1635 00:36:23.700 IO depths : 1=2.3%, 2=5.1%, 4=14.0%, 8=68.1%, 16=10.5%, 32=0.0%, >=64=0.0% 00:36:23.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.700 complete : 0=0.0%, 4=91.3%, 8=3.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.700 issued rwts: total=1670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.700 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.700 filename1: (groupid=0, jobs=1): err= 0: pid=109848: Mon Jul 22 18:43:34 2024 00:36:23.700 read: IOPS=143, BW=575KiB/s (588kB/s)(5748KiB/10002msec) 00:36:23.700 slat (usec): min=4, max=8032, avg=25.64, stdev=279.31 00:36:23.700 clat (msec): min=14, max=215, avg=111.13, stdev=30.42 00:36:23.700 lat (msec): min=14, max=215, avg=111.15, stdev=30.42 00:36:23.700 clat percentiles (msec): 00:36:23.700 | 1.00th=[ 15], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 88], 00:36:23.700 | 30.00th=[ 96], 40.00th=[ 104], 50.00th=[ 108], 60.00th=[ 120], 00:36:23.700 | 70.00th=[ 121], 80.00th=[ 132], 90.00th=[ 153], 95.00th=[ 167], 00:36:23.700 | 99.00th=[ 190], 99.50th=[ 205], 99.90th=[ 215], 99.95th=[ 215], 00:36:23.701 | 99.99th=[ 215] 00:36:23.701 bw ( KiB/s): min= 384, max= 816, per=3.72%, avg=564.68, stdev=110.90, samples=19 00:36:23.701 iops : min= 96, max= 204, avg=141.16, stdev=27.72, samples=19 00:36:23.701 lat (msec) : 20=1.11%, 100=38.62%, 250=60.26% 00:36:23.701 cpu : usr=32.72%, sys=0.76%, ctx=886, majf=0, minf=1636 00:36:23.701 IO depths : 1=3.0%, 2=7.0%, 4=18.2%, 8=62.2%, 16=9.7%, 32=0.0%, >=64=0.0% 00:36:23.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.701 complete : 0=0.0%, 4=92.2%, 8=2.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.701 issued rwts: total=1437,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.701 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.701 filename1: (groupid=0, jobs=1): err= 0: pid=109849: Mon Jul 22 18:43:34 2024 00:36:23.701 read: IOPS=142, BW=569KiB/s (583kB/s)(5696KiB/10012msec) 00:36:23.701 slat (usec): min=5, max=8041, avg=29.47, stdev=281.41 00:36:23.701 clat (msec): min=13, max=201, avg=112.16, stdev=28.14 00:36:23.701 lat (msec): min=13, max=201, avg=112.19, stdev=28.13 00:36:23.701 clat percentiles (msec): 00:36:23.701 | 1.00th=[ 16], 5.00th=[ 73], 10.00th=[ 86], 20.00th=[ 96], 00:36:23.701 | 30.00th=[ 99], 40.00th=[ 104], 50.00th=[ 107], 60.00th=[ 116], 00:36:23.701 | 70.00th=[ 122], 80.00th=[ 132], 90.00th=[ 150], 95.00th=[ 161], 00:36:23.701 | 99.00th=[ 199], 99.50th=[ 
201], 99.90th=[ 203], 99.95th=[ 203], 00:36:23.701 | 99.99th=[ 203] 00:36:23.701 bw ( KiB/s): min= 384, max= 688, per=3.64%, avg=552.32, stdev=77.96, samples=19 00:36:23.701 iops : min= 96, max= 172, avg=138.05, stdev=19.51, samples=19 00:36:23.701 lat (msec) : 20=1.12%, 50=1.12%, 100=32.94%, 250=64.82% 00:36:23.701 cpu : usr=40.46%, sys=1.03%, ctx=1307, majf=0, minf=1636 00:36:23.701 IO depths : 1=3.7%, 2=8.2%, 4=20.1%, 8=59.2%, 16=8.8%, 32=0.0%, >=64=0.0% 00:36:23.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.701 complete : 0=0.0%, 4=92.6%, 8=1.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.701 issued rwts: total=1424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.701 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.701 filename1: (groupid=0, jobs=1): err= 0: pid=109850: Mon Jul 22 18:43:34 2024 00:36:23.701 read: IOPS=183, BW=734KiB/s (751kB/s)(7440KiB/10139msec) 00:36:23.701 slat (usec): min=6, max=8055, avg=24.86, stdev=267.88 00:36:23.701 clat (msec): min=6, max=180, avg=86.67, stdev=32.66 00:36:23.701 lat (msec): min=6, max=180, avg=86.69, stdev=32.66 00:36:23.701 clat percentiles (msec): 00:36:23.701 | 1.00th=[ 8], 5.00th=[ 19], 10.00th=[ 56], 20.00th=[ 62], 00:36:23.701 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 85], 60.00th=[ 96], 00:36:23.701 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 132], 95.00th=[ 146], 00:36:23.701 | 99.00th=[ 165], 99.50th=[ 180], 99.90th=[ 180], 99.95th=[ 180], 00:36:23.701 | 99.99th=[ 180] 00:36:23.701 bw ( KiB/s): min= 512, max= 1536, per=4.86%, avg=737.50, stdev=215.32, samples=20 00:36:23.701 iops : min= 128, max= 384, avg=184.35, stdev=53.82, samples=20 00:36:23.701 lat (msec) : 10=2.47%, 20=3.06%, 50=3.28%, 100=63.82%, 250=27.37% 00:36:23.701 cpu : usr=33.03%, sys=0.80%, ctx=892, majf=0, minf=1637 00:36:23.701 IO depths : 1=1.5%, 2=3.1%, 4=10.9%, 8=72.6%, 16=12.0%, 32=0.0%, >=64=0.0% 00:36:23.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.701 complete : 0=0.0%, 4=90.2%, 8=5.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.701 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.701 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.701 filename1: (groupid=0, jobs=1): err= 0: pid=109851: Mon Jul 22 18:43:34 2024 00:36:23.701 read: IOPS=143, BW=576KiB/s (589kB/s)(5760KiB/10008msec) 00:36:23.701 slat (usec): min=5, max=9030, avg=37.86, stdev=381.43 00:36:23.701 clat (msec): min=8, max=215, avg=110.84, stdev=29.26 00:36:23.701 lat (msec): min=8, max=215, avg=110.88, stdev=29.27 00:36:23.701 clat percentiles (msec): 00:36:23.701 | 1.00th=[ 9], 5.00th=[ 66], 10.00th=[ 85], 20.00th=[ 96], 00:36:23.701 | 30.00th=[ 99], 40.00th=[ 104], 50.00th=[ 108], 60.00th=[ 114], 00:36:23.701 | 70.00th=[ 121], 80.00th=[ 130], 90.00th=[ 146], 95.00th=[ 159], 00:36:23.701 | 99.00th=[ 192], 99.50th=[ 203], 99.90th=[ 215], 99.95th=[ 215], 00:36:23.701 | 99.99th=[ 215] 00:36:23.701 bw ( KiB/s): min= 440, max= 768, per=3.73%, avg=565.16, stdev=82.12, samples=19 00:36:23.701 iops : min= 110, max= 192, avg=141.16, stdev=20.59, samples=19 00:36:23.701 lat (msec) : 10=1.11%, 20=0.14%, 50=1.25%, 100=32.64%, 250=64.86% 00:36:23.701 cpu : usr=39.41%, sys=0.96%, ctx=1249, majf=0, minf=1634 00:36:23.701 IO depths : 1=3.1%, 2=6.9%, 4=17.1%, 8=63.2%, 16=9.7%, 32=0.0%, >=64=0.0% 00:36:23.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.701 complete : 0=0.0%, 4=92.0%, 8=2.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.701 
issued rwts: total=1440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.701 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.701 filename1: (groupid=0, jobs=1): err= 0: pid=109852: Mon Jul 22 18:43:34 2024 00:36:23.701 read: IOPS=138, BW=556KiB/s (569kB/s)(5568KiB/10021msec) 00:36:23.701 slat (nsec): min=4528, max=81090, avg=16365.02, stdev=7864.13 00:36:23.701 clat (msec): min=59, max=227, avg=114.90, stdev=28.64 00:36:23.701 lat (msec): min=59, max=228, avg=114.91, stdev=28.64 00:36:23.701 clat percentiles (msec): 00:36:23.701 | 1.00th=[ 66], 5.00th=[ 77], 10.00th=[ 91], 20.00th=[ 96], 00:36:23.701 | 30.00th=[ 101], 40.00th=[ 104], 50.00th=[ 107], 60.00th=[ 111], 00:36:23.701 | 70.00th=[ 124], 80.00th=[ 138], 90.00th=[ 153], 95.00th=[ 176], 00:36:23.701 | 99.00th=[ 205], 99.50th=[ 211], 99.90th=[ 228], 99.95th=[ 228], 00:36:23.701 | 99.99th=[ 228] 00:36:23.701 bw ( KiB/s): min= 384, max= 640, per=3.64%, avg=551.84, stdev=71.29, samples=19 00:36:23.701 iops : min= 96, max= 160, avg=137.95, stdev=17.81, samples=19 00:36:23.701 lat (msec) : 100=30.96%, 250=69.04% 00:36:23.701 cpu : usr=46.60%, sys=0.90%, ctx=1241, majf=0, minf=1636 00:36:23.701 IO depths : 1=4.2%, 2=9.1%, 4=20.8%, 8=57.5%, 16=8.4%, 32=0.0%, >=64=0.0% 00:36:23.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.701 complete : 0=0.0%, 4=92.9%, 8=1.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.701 issued rwts: total=1392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.701 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.701 filename1: (groupid=0, jobs=1): err= 0: pid=109853: Mon Jul 22 18:43:34 2024 00:36:23.701 read: IOPS=146, BW=588KiB/s (602kB/s)(5896KiB/10031msec) 00:36:23.701 slat (usec): min=4, max=8042, avg=33.79, stdev=360.70 00:36:23.701 clat (msec): min=36, max=230, avg=108.60, stdev=29.90 00:36:23.701 lat (msec): min=37, max=230, avg=108.63, stdev=29.90 00:36:23.701 clat percentiles (msec): 00:36:23.701 | 1.00th=[ 51], 5.00th=[ 65], 10.00th=[ 71], 20.00th=[ 87], 00:36:23.701 | 30.00th=[ 96], 40.00th=[ 99], 50.00th=[ 105], 60.00th=[ 110], 00:36:23.701 | 70.00th=[ 121], 80.00th=[ 131], 90.00th=[ 155], 95.00th=[ 161], 00:36:23.701 | 99.00th=[ 215], 99.50th=[ 218], 99.90th=[ 232], 99.95th=[ 232], 00:36:23.701 | 99.99th=[ 232] 00:36:23.701 bw ( KiB/s): min= 384, max= 768, per=3.87%, avg=586.32, stdev=106.80, samples=19 00:36:23.701 iops : min= 96, max= 192, avg=146.53, stdev=26.68, samples=19 00:36:23.701 lat (msec) : 50=0.95%, 100=40.98%, 250=58.07% 00:36:23.701 cpu : usr=36.89%, sys=0.98%, ctx=1103, majf=0, minf=1636 00:36:23.701 IO depths : 1=2.1%, 2=5.2%, 4=14.5%, 8=67.2%, 16=11.1%, 32=0.0%, >=64=0.0% 00:36:23.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.701 complete : 0=0.0%, 4=91.4%, 8=3.6%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.701 issued rwts: total=1474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.701 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.701 filename1: (groupid=0, jobs=1): err= 0: pid=109854: Mon Jul 22 18:43:34 2024 00:36:23.701 read: IOPS=140, BW=560KiB/s (574kB/s)(5612KiB/10013msec) 00:36:23.701 slat (usec): min=6, max=8038, avg=21.45, stdev=214.34 00:36:23.701 clat (msec): min=48, max=207, avg=114.06, stdev=27.30 00:36:23.701 lat (msec): min=48, max=207, avg=114.08, stdev=27.31 00:36:23.701 clat percentiles (msec): 00:36:23.701 | 1.00th=[ 60], 5.00th=[ 72], 10.00th=[ 85], 20.00th=[ 96], 00:36:23.701 | 30.00th=[ 97], 40.00th=[ 105], 50.00th=[ 108], 60.00th=[ 118], 
00:36:23.701 | 70.00th=[ 124], 80.00th=[ 134], 90.00th=[ 157], 95.00th=[ 165], 00:36:23.701 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 209], 99.95th=[ 209], 00:36:23.701 | 99.99th=[ 209] 00:36:23.701 bw ( KiB/s): min= 400, max= 704, per=3.67%, avg=556.84, stdev=89.29, samples=19 00:36:23.701 iops : min= 100, max= 176, avg=139.16, stdev=22.27, samples=19 00:36:23.701 lat (msec) : 50=0.29%, 100=37.99%, 250=61.72% 00:36:23.701 cpu : usr=32.60%, sys=0.73%, ctx=858, majf=0, minf=1634 00:36:23.701 IO depths : 1=2.9%, 2=6.5%, 4=16.3%, 8=64.2%, 16=10.1%, 32=0.0%, >=64=0.0% 00:36:23.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.701 complete : 0=0.0%, 4=91.9%, 8=2.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.702 issued rwts: total=1403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.702 filename1: (groupid=0, jobs=1): err= 0: pid=109855: Mon Jul 22 18:43:34 2024 00:36:23.702 read: IOPS=157, BW=629KiB/s (644kB/s)(6320KiB/10049msec) 00:36:23.702 slat (usec): min=4, max=4036, avg=17.97, stdev=101.44 00:36:23.702 clat (msec): min=46, max=212, avg=101.65, stdev=28.36 00:36:23.702 lat (msec): min=46, max=212, avg=101.67, stdev=28.36 00:36:23.702 clat percentiles (msec): 00:36:23.702 | 1.00th=[ 48], 5.00th=[ 61], 10.00th=[ 70], 20.00th=[ 75], 00:36:23.702 | 30.00th=[ 85], 40.00th=[ 95], 50.00th=[ 99], 60.00th=[ 108], 00:36:23.702 | 70.00th=[ 110], 80.00th=[ 121], 90.00th=[ 138], 95.00th=[ 157], 00:36:23.702 | 99.00th=[ 192], 99.50th=[ 209], 99.90th=[ 213], 99.95th=[ 213], 00:36:23.702 | 99.99th=[ 213] 00:36:23.702 bw ( KiB/s): min= 512, max= 864, per=4.12%, avg=625.30, stdev=105.30, samples=20 00:36:23.702 iops : min= 128, max= 216, avg=156.25, stdev=26.30, samples=20 00:36:23.702 lat (msec) : 50=1.39%, 100=50.38%, 250=48.23% 00:36:23.702 cpu : usr=36.68%, sys=0.71%, ctx=973, majf=0, minf=1636 00:36:23.702 IO depths : 1=0.9%, 2=1.9%, 4=8.7%, 8=75.4%, 16=13.0%, 32=0.0%, >=64=0.0% 00:36:23.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.702 complete : 0=0.0%, 4=89.8%, 8=6.0%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.702 issued rwts: total=1580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.702 filename2: (groupid=0, jobs=1): err= 0: pid=109856: Mon Jul 22 18:43:34 2024 00:36:23.702 read: IOPS=141, BW=567KiB/s (581kB/s)(5696KiB/10040msec) 00:36:23.702 slat (usec): min=4, max=8034, avg=23.70, stdev=236.97 00:36:23.702 clat (msec): min=60, max=191, avg=112.45, stdev=24.82 00:36:23.702 lat (msec): min=60, max=191, avg=112.48, stdev=24.82 00:36:23.702 clat percentiles (msec): 00:36:23.702 | 1.00th=[ 63], 5.00th=[ 72], 10.00th=[ 85], 20.00th=[ 95], 00:36:23.702 | 30.00th=[ 97], 40.00th=[ 105], 50.00th=[ 108], 60.00th=[ 117], 00:36:23.702 | 70.00th=[ 122], 80.00th=[ 132], 90.00th=[ 148], 95.00th=[ 159], 00:36:23.702 | 99.00th=[ 180], 99.50th=[ 192], 99.90th=[ 192], 99.95th=[ 192], 00:36:23.702 | 99.99th=[ 192] 00:36:23.702 bw ( KiB/s): min= 384, max= 752, per=3.71%, avg=563.79, stdev=89.70, samples=19 00:36:23.702 iops : min= 96, max= 188, avg=140.95, stdev=22.42, samples=19 00:36:23.702 lat (msec) : 100=35.60%, 250=64.40% 00:36:23.702 cpu : usr=34.64%, sys=0.95%, ctx=985, majf=0, minf=1634 00:36:23.702 IO depths : 1=2.7%, 2=6.0%, 4=16.6%, 8=64.7%, 16=10.0%, 32=0.0%, >=64=0.0% 00:36:23.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.702 complete : 0=0.0%, 
4=91.5%, 8=3.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.702 issued rwts: total=1424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.702 filename2: (groupid=0, jobs=1): err= 0: pid=109857: Mon Jul 22 18:43:34 2024 00:36:23.702 read: IOPS=149, BW=597KiB/s (611kB/s)(6000KiB/10058msec) 00:36:23.702 slat (usec): min=6, max=4035, avg=28.97, stdev=222.88 00:36:23.702 clat (msec): min=48, max=247, avg=106.98, stdev=29.94 00:36:23.702 lat (msec): min=48, max=247, avg=107.00, stdev=29.95 00:36:23.702 clat percentiles (msec): 00:36:23.702 | 1.00th=[ 49], 5.00th=[ 64], 10.00th=[ 71], 20.00th=[ 82], 00:36:23.702 | 30.00th=[ 95], 40.00th=[ 99], 50.00th=[ 105], 60.00th=[ 108], 00:36:23.702 | 70.00th=[ 121], 80.00th=[ 136], 90.00th=[ 146], 95.00th=[ 155], 00:36:23.702 | 99.00th=[ 197], 99.50th=[ 201], 99.90th=[ 249], 99.95th=[ 249], 00:36:23.702 | 99.99th=[ 249] 00:36:23.702 bw ( KiB/s): min= 384, max= 766, per=3.91%, avg=592.75, stdev=107.28, samples=20 00:36:23.702 iops : min= 96, max= 191, avg=148.10, stdev=26.78, samples=20 00:36:23.702 lat (msec) : 50=1.07%, 100=42.20%, 250=56.73% 00:36:23.702 cpu : usr=41.65%, sys=0.99%, ctx=1373, majf=0, minf=1637 00:36:23.702 IO depths : 1=3.9%, 2=8.5%, 4=20.1%, 8=58.7%, 16=8.9%, 32=0.0%, >=64=0.0% 00:36:23.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.702 complete : 0=0.0%, 4=92.6%, 8=1.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.702 issued rwts: total=1500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.702 filename2: (groupid=0, jobs=1): err= 0: pid=109858: Mon Jul 22 18:43:34 2024 00:36:23.702 read: IOPS=155, BW=624KiB/s (639kB/s)(6256KiB/10032msec) 00:36:23.702 slat (usec): min=4, max=7034, avg=27.16, stdev=244.35 00:36:23.702 clat (msec): min=55, max=190, avg=102.28, stdev=24.78 00:36:23.702 lat (msec): min=55, max=190, avg=102.31, stdev=24.77 00:36:23.702 clat percentiles (msec): 00:36:23.702 | 1.00th=[ 58], 5.00th=[ 64], 10.00th=[ 71], 20.00th=[ 79], 00:36:23.702 | 30.00th=[ 90], 40.00th=[ 96], 50.00th=[ 102], 60.00th=[ 107], 00:36:23.702 | 70.00th=[ 113], 80.00th=[ 121], 90.00th=[ 136], 95.00th=[ 148], 00:36:23.702 | 99.00th=[ 167], 99.50th=[ 182], 99.90th=[ 190], 99.95th=[ 190], 00:36:23.702 | 99.99th=[ 190] 00:36:23.702 bw ( KiB/s): min= 432, max= 848, per=4.11%, avg=623.63, stdev=110.48, samples=19 00:36:23.702 iops : min= 108, max= 212, avg=155.84, stdev=27.66, samples=19 00:36:23.702 lat (msec) : 100=46.10%, 250=53.90% 00:36:23.702 cpu : usr=43.27%, sys=1.18%, ctx=1248, majf=0, minf=1636 00:36:23.702 IO depths : 1=3.6%, 2=7.7%, 4=18.0%, 8=61.6%, 16=9.1%, 32=0.0%, >=64=0.0% 00:36:23.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.702 complete : 0=0.0%, 4=92.1%, 8=2.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.702 issued rwts: total=1564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.702 filename2: (groupid=0, jobs=1): err= 0: pid=109859: Mon Jul 22 18:43:34 2024 00:36:23.702 read: IOPS=145, BW=581KiB/s (595kB/s)(5836KiB/10037msec) 00:36:23.702 slat (usec): min=5, max=8029, avg=21.31, stdev=209.95 00:36:23.702 clat (msec): min=47, max=195, avg=109.73, stdev=27.37 00:36:23.702 lat (msec): min=47, max=195, avg=109.75, stdev=27.37 00:36:23.702 clat percentiles (msec): 00:36:23.702 | 1.00th=[ 52], 5.00th=[ 63], 10.00th=[ 77], 20.00th=[ 93], 00:36:23.702 | 
30.00th=[ 96], 40.00th=[ 102], 50.00th=[ 107], 60.00th=[ 110], 00:36:23.702 | 70.00th=[ 118], 80.00th=[ 132], 90.00th=[ 153], 95.00th=[ 161], 00:36:23.702 | 99.00th=[ 174], 99.50th=[ 192], 99.90th=[ 197], 99.95th=[ 197], 00:36:23.702 | 99.99th=[ 197] 00:36:23.702 bw ( KiB/s): min= 440, max= 696, per=3.83%, avg=580.42, stdev=73.09, samples=19 00:36:23.702 iops : min= 110, max= 174, avg=145.05, stdev=18.27, samples=19 00:36:23.702 lat (msec) : 50=0.62%, 100=35.85%, 250=63.54% 00:36:23.702 cpu : usr=38.37%, sys=0.86%, ctx=1126, majf=0, minf=1636 00:36:23.702 IO depths : 1=3.7%, 2=7.9%, 4=18.3%, 8=61.2%, 16=8.9%, 32=0.0%, >=64=0.0% 00:36:23.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.702 complete : 0=0.0%, 4=92.3%, 8=2.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.702 issued rwts: total=1459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.702 filename2: (groupid=0, jobs=1): err= 0: pid=109860: Mon Jul 22 18:43:34 2024 00:36:23.702 read: IOPS=175, BW=701KiB/s (718kB/s)(7092KiB/10110msec) 00:36:23.702 slat (usec): min=4, max=8040, avg=29.11, stdev=301.28 00:36:23.702 clat (msec): min=20, max=188, avg=90.87, stdev=26.73 00:36:23.702 lat (msec): min=20, max=188, avg=90.90, stdev=26.75 00:36:23.702 clat percentiles (msec): 00:36:23.702 | 1.00th=[ 26], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 68], 00:36:23.702 | 30.00th=[ 74], 40.00th=[ 81], 50.00th=[ 91], 60.00th=[ 97], 00:36:23.702 | 70.00th=[ 104], 80.00th=[ 111], 90.00th=[ 128], 95.00th=[ 136], 00:36:23.702 | 99.00th=[ 161], 99.50th=[ 188], 99.90th=[ 190], 99.95th=[ 190], 00:36:23.702 | 99.99th=[ 190] 00:36:23.702 bw ( KiB/s): min= 512, max= 952, per=4.61%, avg=699.10, stdev=116.69, samples=20 00:36:23.702 iops : min= 128, max= 238, avg=174.70, stdev=29.15, samples=20 00:36:23.702 lat (msec) : 50=2.76%, 100=63.62%, 250=33.62% 00:36:23.702 cpu : usr=41.11%, sys=1.01%, ctx=1337, majf=0, minf=1637 00:36:23.702 IO depths : 1=1.3%, 2=3.0%, 4=9.8%, 8=73.5%, 16=12.4%, 32=0.0%, >=64=0.0% 00:36:23.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.702 complete : 0=0.0%, 4=90.2%, 8=5.7%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.702 issued rwts: total=1773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.702 filename2: (groupid=0, jobs=1): err= 0: pid=109861: Mon Jul 22 18:43:34 2024 00:36:23.702 read: IOPS=237, BW=950KiB/s (973kB/s)(9560KiB/10062msec) 00:36:23.702 slat (usec): min=4, max=8043, avg=18.89, stdev=184.08 00:36:23.702 clat (msec): min=2, max=172, avg=67.13, stdev=41.47 00:36:23.702 lat (msec): min=2, max=172, avg=67.15, stdev=41.48 00:36:23.702 clat percentiles (msec): 00:36:23.702 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 7], 00:36:23.702 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 82], 00:36:23.702 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 117], 95.00th=[ 138], 00:36:23.702 | 99.00th=[ 157], 99.50th=[ 163], 99.90th=[ 174], 99.95th=[ 174], 00:36:23.702 | 99.99th=[ 174] 00:36:23.702 bw ( KiB/s): min= 512, max= 5120, per=6.26%, avg=949.40, stdev=987.48, samples=20 00:36:23.702 iops : min= 128, max= 1280, avg=237.30, stdev=246.88, samples=20 00:36:23.702 lat (msec) : 4=16.82%, 10=4.60%, 20=2.01%, 50=1.59%, 100=56.78% 00:36:23.702 lat (msec) : 250=18.20% 00:36:23.702 cpu : usr=37.73%, sys=1.02%, ctx=1090, majf=0, minf=1635 00:36:23.702 IO depths : 1=2.1%, 2=4.3%, 4=11.9%, 8=70.5%, 16=11.1%, 32=0.0%, >=64=0.0% 
00:36:23.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.702 complete : 0=0.0%, 4=90.4%, 8=4.7%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.702 issued rwts: total=2390,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.702 filename2: (groupid=0, jobs=1): err= 0: pid=109862: Mon Jul 22 18:43:34 2024 00:36:23.702 read: IOPS=148, BW=593KiB/s (607kB/s)(5956KiB/10050msec) 00:36:23.702 slat (nsec): min=4541, max=57495, avg=14727.43, stdev=6330.62 00:36:23.702 clat (msec): min=45, max=211, avg=107.80, stdev=25.71 00:36:23.702 lat (msec): min=45, max=211, avg=107.81, stdev=25.71 00:36:23.702 clat percentiles (msec): 00:36:23.702 | 1.00th=[ 50], 5.00th=[ 67], 10.00th=[ 72], 20.00th=[ 92], 00:36:23.702 | 30.00th=[ 96], 40.00th=[ 99], 50.00th=[ 107], 60.00th=[ 108], 00:36:23.703 | 70.00th=[ 121], 80.00th=[ 125], 90.00th=[ 144], 95.00th=[ 157], 00:36:23.703 | 99.00th=[ 184], 99.50th=[ 194], 99.90th=[ 213], 99.95th=[ 213], 00:36:23.703 | 99.99th=[ 213] 00:36:23.703 bw ( KiB/s): min= 440, max= 848, per=3.88%, avg=588.80, stdev=92.54, samples=20 00:36:23.703 iops : min= 110, max= 212, avg=147.10, stdev=23.10, samples=20 00:36:23.703 lat (msec) : 50=1.21%, 100=39.83%, 250=58.97% 00:36:23.703 cpu : usr=33.72%, sys=0.90%, ctx=909, majf=0, minf=1634 00:36:23.703 IO depths : 1=2.4%, 2=5.3%, 4=14.8%, 8=66.9%, 16=10.6%, 32=0.0%, >=64=0.0% 00:36:23.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.703 complete : 0=0.0%, 4=91.4%, 8=3.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.703 issued rwts: total=1489,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.703 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.703 filename2: (groupid=0, jobs=1): err= 0: pid=109863: Mon Jul 22 18:43:34 2024 00:36:23.703 read: IOPS=155, BW=623KiB/s (638kB/s)(6256KiB/10048msec) 00:36:23.703 slat (usec): min=4, max=8038, avg=20.16, stdev=203.02 00:36:23.703 clat (msec): min=45, max=204, avg=102.32, stdev=27.33 00:36:23.703 lat (msec): min=45, max=204, avg=102.34, stdev=27.33 00:36:23.703 clat percentiles (msec): 00:36:23.703 | 1.00th=[ 54], 5.00th=[ 63], 10.00th=[ 71], 20.00th=[ 78], 00:36:23.703 | 30.00th=[ 85], 40.00th=[ 95], 50.00th=[ 101], 60.00th=[ 107], 00:36:23.703 | 70.00th=[ 111], 80.00th=[ 125], 90.00th=[ 142], 95.00th=[ 146], 00:36:23.703 | 99.00th=[ 178], 99.50th=[ 199], 99.90th=[ 205], 99.95th=[ 205], 00:36:23.703 | 99.99th=[ 205] 00:36:23.703 bw ( KiB/s): min= 508, max= 824, per=4.10%, avg=622.30, stdev=103.64, samples=20 00:36:23.703 iops : min= 127, max= 206, avg=155.50, stdev=25.95, samples=20 00:36:23.703 lat (msec) : 50=0.90%, 100=48.53%, 250=50.58% 00:36:23.703 cpu : usr=39.20%, sys=0.76%, ctx=1068, majf=0, minf=1636 00:36:23.703 IO depths : 1=2.2%, 2=4.9%, 4=13.7%, 8=68.3%, 16=11.0%, 32=0.0%, >=64=0.0% 00:36:23.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.703 complete : 0=0.0%, 4=90.9%, 8=4.1%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.703 issued rwts: total=1564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.703 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.703 00:36:23.703 Run status group 0 (all jobs): 00:36:23.703 READ: bw=14.8MiB/s (15.5MB/s), 556KiB/s-950KiB/s (569kB/s-973kB/s), io=150MiB (157MB), run=10002-10139msec 00:36:24.269 ----------------------------------------------------- 00:36:24.269 Suppressions used: 00:36:24.269 count bytes template 00:36:24.269 45 402 /usr/src/fio/parse.c 
00:36:24.269 1 8 libtcmalloc_minimal.so 00:36:24.269 1 904 libcrypto.so 00:36:24.269 ----------------------------------------------------- 00:36:24.269 00:36:24.528 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:24.528 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:24.528 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:24.528 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:24.528 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:24.528 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:24.528 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.528 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.528 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.528 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:24.528 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.528 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.528 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.528 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.529 18:43:36 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.529 bdev_null0 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.529 [2024-07-22 18:43:36.379081] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 
512 --md-size 16 --dif-type 1 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.529 bdev_null1 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:24.529 { 00:36:24.529 "params": { 00:36:24.529 "name": "Nvme$subsystem", 00:36:24.529 "trtype": "$TEST_TRANSPORT", 00:36:24.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:24.529 "adrfam": "ipv4", 00:36:24.529 "trsvcid": "$NVMF_PORT", 00:36:24.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:24.529 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:24.529 "hdgst": ${hdgst:-false}, 00:36:24.529 "ddgst": ${ddgst:-false} 00:36:24.529 }, 00:36:24.529 "method": "bdev_nvme_attach_controller" 00:36:24.529 } 00:36:24.529 EOF 00:36:24.529 )") 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:24.529 { 00:36:24.529 "params": { 00:36:24.529 "name": "Nvme$subsystem", 00:36:24.529 "trtype": "$TEST_TRANSPORT", 00:36:24.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:24.529 "adrfam": "ipv4", 00:36:24.529 "trsvcid": "$NVMF_PORT", 00:36:24.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:24.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:24.529 "hdgst": ${hdgst:-false}, 00:36:24.529 "ddgst": ${ddgst:-false} 00:36:24.529 }, 00:36:24.529 "method": "bdev_nvme_attach_controller" 00:36:24.529 } 00:36:24.529 EOF 00:36:24.529 )") 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:36:24.529 18:43:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:24.529 "params": { 00:36:24.529 "name": "Nvme0", 00:36:24.529 "trtype": "tcp", 00:36:24.529 "traddr": "10.0.0.2", 00:36:24.529 "adrfam": "ipv4", 00:36:24.530 "trsvcid": "4420", 00:36:24.530 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:24.530 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:24.530 "hdgst": false, 00:36:24.530 "ddgst": false 00:36:24.530 }, 00:36:24.530 "method": "bdev_nvme_attach_controller" 00:36:24.530 },{ 00:36:24.530 "params": { 00:36:24.530 "name": "Nvme1", 00:36:24.530 "trtype": "tcp", 00:36:24.530 "traddr": "10.0.0.2", 00:36:24.530 "adrfam": "ipv4", 00:36:24.530 "trsvcid": "4420", 00:36:24.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:24.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:24.530 "hdgst": false, 00:36:24.530 "ddgst": false 00:36:24.530 }, 00:36:24.530 "method": "bdev_nvme_attach_controller" 00:36:24.530 }' 00:36:24.530 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:24.530 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:24.530 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:36:24.530 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:24.530 18:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:24.788 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:24.788 ... 00:36:24.788 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:24.788 ... 
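For reference, the traced invocation above boils down to handing fio's spdk_bdev plugin the JSON that was just printed plus a small job file. A minimal standalone sketch follows; the bdev.json/randread.fio file names, the Nvme0n1/Nvme1n1 bdev names (SPDK's usual naming for namespace 1 of controllers Nvme0/Nvme1) and the thread/time_based settings are assumptions, not taken from this log.

# sketch only: save the JSON printed above as bdev.json, then drive it with fio
cat > randread.fio <<'EOF'
[global]
ioengine=spdk_bdev          # SPDK bdev fio plugin, loaded via LD_PRELOAD below
spdk_json_conf=bdev.json    # the bdev_nvme_attach_controller config printed above
thread=1                    # assumed; the SPDK plugin runs jobs as threads
rw=randread
bs=8k,16k,128k              # matches the bs=8k,16k,128k set by the test
iodepth=8
runtime=5
time_based=1                # assumed

[filename0]
filename=Nvme0n1            # assumed bdev name for the attached Nvme0 controller
numjobs=2

[filename1]
filename=Nvme1n1            # assumed bdev name for the attached Nvme1 controller
numjobs=2
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio randread.fio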
00:36:24.788 fio-3.35 00:36:24.788 Starting 4 threads 00:36:31.363 00:36:31.363 filename0: (groupid=0, jobs=1): err= 0: pid=109995: Mon Jul 22 18:43:42 2024 00:36:31.363 read: IOPS=1544, BW=12.1MiB/s (12.7MB/s)(60.4MiB/5002msec) 00:36:31.363 slat (nsec): min=6463, max=55085, avg=15005.79, stdev=5551.54 00:36:31.363 clat (usec): min=1461, max=8096, avg=5095.07, stdev=173.42 00:36:31.363 lat (usec): min=1471, max=8108, avg=5110.07, stdev=174.56 00:36:31.363 clat percentiles (usec): 00:36:31.363 | 1.00th=[ 4948], 5.00th=[ 4948], 10.00th=[ 5014], 20.00th=[ 5014], 00:36:31.363 | 30.00th=[ 5080], 40.00th=[ 5080], 50.00th=[ 5080], 60.00th=[ 5145], 00:36:31.363 | 70.00th=[ 5145], 80.00th=[ 5145], 90.00th=[ 5211], 95.00th=[ 5276], 00:36:31.363 | 99.00th=[ 5342], 99.50th=[ 5407], 99.90th=[ 5932], 99.95th=[ 5997], 00:36:31.363 | 99.99th=[ 8094] 00:36:31.363 bw ( KiB/s): min=12056, max=12544, per=25.07%, avg=12376.00, stdev=150.31, samples=9 00:36:31.363 iops : min= 1507, max= 1568, avg=1547.00, stdev=18.79, samples=9 00:36:31.363 lat (msec) : 2=0.10%, 4=0.12%, 10=99.78% 00:36:31.363 cpu : usr=93.74%, sys=5.04%, ctx=9, majf=0, minf=1637 00:36:31.363 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:31.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:31.363 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:31.363 issued rwts: total=7728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:31.363 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:31.363 filename0: (groupid=0, jobs=1): err= 0: pid=109996: Mon Jul 22 18:43:42 2024 00:36:31.363 read: IOPS=1542, BW=12.0MiB/s (12.6MB/s)(60.2MiB/5001msec) 00:36:31.363 slat (usec): min=6, max=116, avg=14.84, stdev= 5.36 00:36:31.363 clat (usec): min=3846, max=9419, avg=5115.80, stdev=167.02 00:36:31.363 lat (usec): min=3863, max=9442, avg=5130.64, stdev=167.08 00:36:31.363 clat percentiles (usec): 00:36:31.363 | 1.00th=[ 4948], 5.00th=[ 4948], 10.00th=[ 5014], 20.00th=[ 5014], 00:36:31.363 | 30.00th=[ 5080], 40.00th=[ 5080], 50.00th=[ 5080], 60.00th=[ 5145], 00:36:31.363 | 70.00th=[ 5145], 80.00th=[ 5211], 90.00th=[ 5211], 95.00th=[ 5276], 00:36:31.363 | 99.00th=[ 5342], 99.50th=[ 5407], 99.90th=[ 9372], 99.95th=[ 9372], 00:36:31.363 | 99.99th=[ 9372] 00:36:31.363 bw ( KiB/s): min=12032, max=12416, per=25.03%, avg=12359.11, stdev=129.77, samples=9 00:36:31.363 iops : min= 1504, max= 1552, avg=1544.89, stdev=16.22, samples=9 00:36:31.363 lat (msec) : 4=0.03%, 10=99.97% 00:36:31.363 cpu : usr=94.04%, sys=4.82%, ctx=6, majf=0, minf=1637 00:36:31.363 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:31.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:31.363 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:31.363 issued rwts: total=7712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:31.363 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:31.363 filename1: (groupid=0, jobs=1): err= 0: pid=109997: Mon Jul 22 18:43:42 2024 00:36:31.363 read: IOPS=1541, BW=12.0MiB/s (12.6MB/s)(60.2MiB/5002msec) 00:36:31.363 slat (nsec): min=6288, max=54135, avg=11629.61, stdev=4231.75 00:36:31.363 clat (usec): min=3779, max=11505, avg=5125.51, stdev=234.88 00:36:31.363 lat (usec): min=3795, max=11549, avg=5137.14, stdev=235.21 00:36:31.363 clat percentiles (usec): 00:36:31.363 | 1.00th=[ 4948], 5.00th=[ 5014], 10.00th=[ 5014], 20.00th=[ 5080], 00:36:31.363 | 30.00th=[ 5080], 40.00th=[ 
5080], 50.00th=[ 5080], 60.00th=[ 5145], 00:36:31.363 | 70.00th=[ 5145], 80.00th=[ 5211], 90.00th=[ 5211], 95.00th=[ 5276], 00:36:31.363 | 99.00th=[ 5407], 99.50th=[ 5407], 99.90th=[11469], 99.95th=[11469], 00:36:31.363 | 99.99th=[11469] 00:36:31.363 bw ( KiB/s): min=12032, max=12544, per=25.03%, avg=12359.11, stdev=144.69, samples=9 00:36:31.363 iops : min= 1504, max= 1568, avg=1544.89, stdev=18.09, samples=9 00:36:31.363 lat (msec) : 4=0.23%, 10=99.66%, 20=0.10% 00:36:31.363 cpu : usr=93.90%, sys=4.96%, ctx=13, majf=0, minf=1637 00:36:31.363 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:31.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:31.363 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:31.363 issued rwts: total=7712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:31.363 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:31.363 filename1: (groupid=0, jobs=1): err= 0: pid=109998: Mon Jul 22 18:43:42 2024 00:36:31.363 read: IOPS=1543, BW=12.1MiB/s (12.6MB/s)(60.3MiB/5001msec) 00:36:31.363 slat (usec): min=4, max=117, avg=12.31, stdev= 5.19 00:36:31.363 clat (usec): min=2651, max=8382, avg=5116.17, stdev=160.87 00:36:31.363 lat (usec): min=2661, max=8392, avg=5128.48, stdev=161.41 00:36:31.363 clat percentiles (usec): 00:36:31.363 | 1.00th=[ 4948], 5.00th=[ 5014], 10.00th=[ 5014], 20.00th=[ 5014], 00:36:31.363 | 30.00th=[ 5080], 40.00th=[ 5080], 50.00th=[ 5080], 60.00th=[ 5145], 00:36:31.363 | 70.00th=[ 5145], 80.00th=[ 5211], 90.00th=[ 5211], 95.00th=[ 5276], 00:36:31.363 | 99.00th=[ 5407], 99.50th=[ 5473], 99.90th=[ 6849], 99.95th=[ 6915], 00:36:31.363 | 99.99th=[ 8356] 00:36:31.363 bw ( KiB/s): min=12032, max=12416, per=25.03%, avg=12359.11, stdev=129.77, samples=9 00:36:31.363 iops : min= 1504, max= 1552, avg=1544.89, stdev=16.22, samples=9 00:36:31.363 lat (msec) : 4=0.25%, 10=99.75% 00:36:31.363 cpu : usr=93.70%, sys=5.02%, ctx=45, majf=0, minf=1635 00:36:31.363 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:31.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:31.363 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:31.363 issued rwts: total=7720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:31.363 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:31.363 00:36:31.363 Run status group 0 (all jobs): 00:36:31.363 READ: bw=48.2MiB/s (50.6MB/s), 12.0MiB/s-12.1MiB/s (12.6MB/s-12.7MB/s), io=241MiB (253MB), run=5001-5002msec 00:36:32.296 ----------------------------------------------------- 00:36:32.296 Suppressions used: 00:36:32.296 count bytes template 00:36:32.296 6 52 /usr/src/fio/parse.c 00:36:32.296 1 8 libtcmalloc_minimal.so 00:36:32.296 1 904 libcrypto.so 00:36:32.296 ----------------------------------------------------- 00:36:32.296 00:36:32.296 18:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:32.296 18:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:32.296 18:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:32.296 18:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:32.296 18:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:32.296 18:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:32.296 18:43:44 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.296 18:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.296 18:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.296 18:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:32.296 18:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.296 18:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.296 18:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.296 18:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:32.296 18:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:32.296 18:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:32.296 18:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:32.296 18:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.296 18:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.296 18:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.296 18:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:32.296 18:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.296 18:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.296 18:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.296 ************************************ 00:36:32.296 END TEST fio_dif_rand_params 00:36:32.296 ************************************ 00:36:32.296 00:36:32.296 real 0m28.761s 00:36:32.296 user 2m11.366s 00:36:32.296 sys 0m5.469s 00:36:32.296 18:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:32.296 18:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.296 18:43:44 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:36:32.296 18:43:44 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:32.296 18:43:44 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:32.296 18:43:44 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:32.296 18:43:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:32.296 ************************************ 00:36:32.296 START TEST fio_dif_digest 00:36:32.296 ************************************ 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:32.296 
18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:32.296 bdev_null0 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:32.296 [2024-07-22 18:43:44.216122] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:32.296 { 00:36:32.296 "params": { 00:36:32.296 "name": "Nvme$subsystem", 00:36:32.296 "trtype": "$TEST_TRANSPORT", 00:36:32.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:32.296 "adrfam": "ipv4", 00:36:32.296 "trsvcid": "$NVMF_PORT", 00:36:32.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:32.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:32.296 "hdgst": ${hdgst:-false}, 00:36:32.296 "ddgst": ${ddgst:-false} 00:36:32.296 }, 00:36:32.296 "method": "bdev_nvme_attach_controller" 00:36:32.296 } 00:36:32.296 EOF 00:36:32.296 )") 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
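Stripped of the harness wrappers, the setup traced above is four RPC calls: create a DIF type-3 null bdev with 16-byte metadata, wrap it in an NVMe-oF subsystem, and expose it on a TCP listener; the header/data digest part (hdgst/ddgst true) lives in the initiator-side bdev_nvme_attach_controller parameters printed just below. A rough rpc.py equivalent, assuming a running nvmf_tgt with the TCP transport already created:

# assumes nvmf_tgt is running and the tcp transport was created earlier in the setup
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420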
00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:32.296 "params": { 00:36:32.296 "name": "Nvme0", 00:36:32.296 "trtype": "tcp", 00:36:32.296 "traddr": "10.0.0.2", 00:36:32.296 "adrfam": "ipv4", 00:36:32.296 "trsvcid": "4420", 00:36:32.296 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:32.296 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:32.296 "hdgst": true, 00:36:32.296 "ddgst": true 00:36:32.296 }, 00:36:32.296 "method": "bdev_nvme_attach_controller" 00:36:32.296 }' 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:32.296 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # break 00:36:32.297 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:32.297 18:43:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:32.554 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:32.554 ... 00:36:32.554 fio-3.35 00:36:32.554 Starting 3 threads 00:36:44.800 00:36:44.800 filename0: (groupid=0, jobs=1): err= 0: pid=110105: Mon Jul 22 18:43:55 2024 00:36:44.800 read: IOPS=164, BW=20.6MiB/s (21.6MB/s)(206MiB/10003msec) 00:36:44.800 slat (nsec): min=6623, max=75257, avg=23247.50, stdev=7504.51 00:36:44.800 clat (usec): min=8174, max=25579, avg=18179.35, stdev=1698.43 00:36:44.800 lat (usec): min=8194, max=25597, avg=18202.59, stdev=1698.17 00:36:44.800 clat percentiles (usec): 00:36:44.800 | 1.00th=[12256], 5.00th=[15795], 10.00th=[16319], 20.00th=[16909], 00:36:44.800 | 30.00th=[17433], 40.00th=[17695], 50.00th=[18220], 60.00th=[18482], 00:36:44.800 | 70.00th=[19006], 80.00th=[19530], 90.00th=[20055], 95.00th=[20841], 00:36:44.800 | 99.00th=[22414], 99.50th=[22676], 99.90th=[24773], 99.95th=[25560], 00:36:44.800 | 99.99th=[25560] 00:36:44.800 bw ( KiB/s): min=20224, max=22016, per=32.65%, avg=21059.37, stdev=531.83, samples=19 00:36:44.800 iops : min= 158, max= 172, avg=164.53, stdev= 4.15, samples=19 00:36:44.800 lat (msec) : 10=0.18%, 20=88.17%, 50=11.65% 00:36:44.800 cpu : usr=92.22%, sys=6.22%, ctx=91, majf=0, minf=1637 00:36:44.800 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:44.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.800 issued rwts: total=1648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.800 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:44.800 filename0: (groupid=0, jobs=1): err= 0: pid=110106: Mon Jul 22 18:43:55 2024 00:36:44.800 read: IOPS=143, BW=18.0MiB/s (18.8MB/s)(180MiB/10005msec) 00:36:44.800 slat (nsec): min=6870, max=71339, avg=22546.82, stdev=8268.33 00:36:44.800 clat (usec): min=5656, max=33047, avg=20835.62, stdev=2211.70 00:36:44.800 lat (usec): min=5681, max=33073, avg=20858.17, stdev=2213.49 00:36:44.800 clat percentiles (usec): 00:36:44.800 | 1.00th=[12387], 5.00th=[18482], 10.00th=[19006], 20.00th=[19530], 00:36:44.800 | 30.00th=[20055], 40.00th=[20317], 50.00th=[20579], 60.00th=[21103], 
00:36:44.800 | 70.00th=[21365], 80.00th=[21890], 90.00th=[22676], 95.00th=[23725], 00:36:44.800 | 99.00th=[28967], 99.50th=[29754], 99.90th=[31851], 99.95th=[33162], 00:36:44.800 | 99.99th=[33162] 00:36:44.800 bw ( KiB/s): min=15616, max=20777, per=28.48%, avg=18370.05, stdev=1184.46, samples=20 00:36:44.800 iops : min= 122, max= 162, avg=143.50, stdev= 9.22, samples=20 00:36:44.800 lat (msec) : 10=0.14%, 20=29.62%, 50=70.24% 00:36:44.800 cpu : usr=92.87%, sys=5.64%, ctx=12, majf=0, minf=1635 00:36:44.800 IO depths : 1=11.5%, 2=88.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:44.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.800 issued rwts: total=1438,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.800 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:44.800 filename0: (groupid=0, jobs=1): err= 0: pid=110107: Mon Jul 22 18:43:55 2024 00:36:44.800 read: IOPS=195, BW=24.4MiB/s (25.6MB/s)(245MiB/10005msec) 00:36:44.800 slat (nsec): min=7173, max=79711, avg=22606.13, stdev=7172.31 00:36:44.801 clat (usec): min=11472, max=60231, avg=15315.66, stdev=2725.26 00:36:44.801 lat (usec): min=11512, max=60252, avg=15338.27, stdev=2725.42 00:36:44.801 clat percentiles (usec): 00:36:44.801 | 1.00th=[12256], 5.00th=[13173], 10.00th=[13566], 20.00th=[14091], 00:36:44.801 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15139], 60.00th=[15401], 00:36:44.801 | 70.00th=[15795], 80.00th=[16057], 90.00th=[16581], 95.00th=[17433], 00:36:44.801 | 99.00th=[20579], 99.50th=[22152], 99.90th=[58459], 99.95th=[60031], 00:36:44.801 | 99.99th=[60031] 00:36:44.801 bw ( KiB/s): min=22272, max=26368, per=38.77%, avg=25011.20, stdev=1206.76, samples=20 00:36:44.801 iops : min= 174, max= 206, avg=195.40, stdev= 9.43, samples=20 00:36:44.801 lat (msec) : 20=98.42%, 50=1.28%, 100=0.31% 00:36:44.801 cpu : usr=92.29%, sys=6.14%, ctx=20, majf=0, minf=1637 00:36:44.801 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:44.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.801 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.801 issued rwts: total=1956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.801 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:44.801 00:36:44.801 Run status group 0 (all jobs): 00:36:44.801 READ: bw=63.0MiB/s (66.1MB/s), 18.0MiB/s-24.4MiB/s (18.8MB/s-25.6MB/s), io=630MiB (661MB), run=10003-10005msec 00:36:45.059 ----------------------------------------------------- 00:36:45.059 Suppressions used: 00:36:45.059 count bytes template 00:36:45.059 5 44 /usr/src/fio/parse.c 00:36:45.059 1 8 libtcmalloc_minimal.so 00:36:45.059 1 904 libcrypto.so 00:36:45.059 ----------------------------------------------------- 00:36:45.059 00:36:45.059 18:43:56 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:45.059 18:43:56 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:45.059 18:43:56 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:45.059 18:43:56 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:45.059 18:43:56 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:45.059 18:43:56 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:45.059 18:43:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:36:45.059 18:43:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:45.059 18:43:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:45.059 18:43:56 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:45.059 18:43:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:45.059 18:43:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:45.059 ************************************ 00:36:45.059 END TEST fio_dif_digest 00:36:45.059 ************************************ 00:36:45.059 18:43:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:45.059 00:36:45.059 real 0m12.766s 00:36:45.059 user 0m29.958s 00:36:45.059 sys 0m2.316s 00:36:45.059 18:43:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:45.059 18:43:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:45.059 18:43:56 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:36:45.059 18:43:56 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:45.059 18:43:56 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:45.059 18:43:56 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:45.059 18:43:56 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:45.059 18:43:57 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:45.059 18:43:57 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:45.059 18:43:57 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:45.059 18:43:57 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:45.059 rmmod nvme_tcp 00:36:45.059 rmmod nvme_fabrics 00:36:45.317 rmmod nvme_keyring 00:36:45.317 18:43:57 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:45.317 18:43:57 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:45.317 18:43:57 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:45.317 18:43:57 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 109339 ']' 00:36:45.317 18:43:57 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 109339 00:36:45.317 18:43:57 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 109339 ']' 00:36:45.317 18:43:57 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 109339 00:36:45.317 18:43:57 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:36:45.317 18:43:57 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:45.317 18:43:57 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 109339 00:36:45.317 killing process with pid 109339 00:36:45.317 18:43:57 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:45.317 18:43:57 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:45.317 18:43:57 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 109339' 00:36:45.317 18:43:57 nvmf_dif -- common/autotest_common.sh@967 -- # kill 109339 00:36:45.317 18:43:57 nvmf_dif -- common/autotest_common.sh@972 -- # wait 109339 00:36:46.731 18:43:58 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:46.732 18:43:58 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:46.732 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:46.990 Waiting for block devices as requested 00:36:46.990 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:46.990 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:36:46.990 18:43:58 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:46.990 18:43:58 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:46.990 18:43:58 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:46.990 18:43:58 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:46.990 18:43:58 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:46.990 18:43:58 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:46.990 18:43:58 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:47.249 18:43:59 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:36:47.249 00:36:47.249 real 1m11.192s 00:36:47.249 user 4m13.524s 00:36:47.249 sys 0m15.278s 00:36:47.249 18:43:59 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:47.249 ************************************ 00:36:47.249 END TEST nvmf_dif 00:36:47.249 ************************************ 00:36:47.249 18:43:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:47.249 18:43:59 -- common/autotest_common.sh@1142 -- # return 0 00:36:47.249 18:43:59 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:47.249 18:43:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:47.249 18:43:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:47.249 18:43:59 -- common/autotest_common.sh@10 -- # set +x 00:36:47.249 ************************************ 00:36:47.249 START TEST nvmf_abort_qd_sizes 00:36:47.249 ************************************ 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:47.249 * Looking for test storage... 
00:36:47.249 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:36:47.249 18:43:59 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:36:47.249 Cannot find device "nvmf_tgt_br" 00:36:47.249 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:36:47.250 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:36:47.250 Cannot find device "nvmf_tgt_br2" 00:36:47.250 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:36:47.250 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:36:47.250 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:36:47.508 Cannot find device "nvmf_tgt_br" 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:36:47.508 Cannot find device "nvmf_tgt_br2" 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:47.508 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:47.508 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:47.508 18:43:59 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:47.508 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:47.767 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:47.767 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:47.767 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:36:47.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:47.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:36:47.767 00:36:47.767 --- 10.0.0.2 ping statistics --- 00:36:47.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:47.767 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:36:47.767 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:36:47.767 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:47.767 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:36:47.767 00:36:47.767 --- 10.0.0.3 ping statistics --- 00:36:47.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:47.767 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:36:47.767 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:47.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:47.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:36:47.767 00:36:47.767 --- 10.0.0.1 ping statistics --- 00:36:47.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:47.767 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:36:47.767 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:47.767 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:36:47.767 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:47.767 18:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:36:48.334 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:48.334 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:36:48.618 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:36:48.618 18:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:48.618 18:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:48.618 18:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:48.618 18:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:48.618 18:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:48.618 18:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:48.618 18:44:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:48.618 18:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:48.618 18:44:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:48.618 18:44:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:48.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:48.618 18:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=110716 00:36:48.618 18:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 110716 00:36:48.618 18:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:48.618 18:44:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 110716 ']' 00:36:48.618 18:44:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:48.618 18:44:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:48.618 18:44:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:48.618 18:44:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:48.618 18:44:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:48.618 [2024-07-22 18:44:00.591941] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
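The nvmf_veth_init trace just above is what builds the virtual network these TCP tests run over: the target lives in the nvmf_tgt_ns_spdk namespace at 10.0.0.2, the initiator stays on the host at 10.0.0.1, veth pairs and a bridge connect the two, and an iptables rule admits NVMe/TCP traffic on port 4420 before the ping checks confirm reachability. A condensed sketch of that setup follows; it is distilled from the trace rather than copied from nvmf/common.sh, and it leaves out the second target interface (nvmf_tgt_if2 / 10.0.0.3) and the preceding cleanup steps.

    # Minimal veth/bridge topology, condensed from the nvmf_veth_init trace above
    ip netns add nvmf_tgt_ns_spdk                                   # target gets its own network namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge                                 # bridge ties the host-side peers together
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the default port
    ping -c 1 10.0.0.2                                              # initiator -> target reachability check

With this in place, the nvmf_tgt application launched next ("ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xf") can listen on 10.0.0.2:4420 while fio and the abort example connect from the host side.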
00:36:48.618 [2024-07-22 18:44:00.592129] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:48.876 [2024-07-22 18:44:00.769819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:49.135 [2024-07-22 18:44:01.061796] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:49.135 [2024-07-22 18:44:01.061929] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:49.135 [2024-07-22 18:44:01.061950] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:49.135 [2024-07-22 18:44:01.061966] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:49.135 [2024-07-22 18:44:01.061979] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:49.135 [2024-07-22 18:44:01.062202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:49.135 [2024-07-22 18:44:01.063060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:49.135 [2024-07-22 18:44:01.063149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:49.135 [2024-07-22 18:44:01.063167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:49.701 18:44:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:49.701 18:44:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:36:49.701 18:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:49.701 18:44:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:49.701 18:44:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:49.701 18:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:49.701 18:44:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:49.701 18:44:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:49.701 18:44:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:49.701 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:49.701 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:49.701 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:36:49.701 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:36:49.701 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:36:49.701 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:36:49.701 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:36:49.701 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:36:49.701 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:36:49.701 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:36:49.701 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # 
class=01 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:36:49.702 18:44:01 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:49.702 18:44:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:49.702 ************************************ 00:36:49.702 START TEST spdk_target_abort 00:36:49.702 ************************************ 00:36:49.702 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:36:49.702 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:49.702 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:36:49.702 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.702 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:49.702 spdk_targetn1 00:36:49.702 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.702 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:49.702 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.702 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:49.961 [2024-07-22 18:44:01.719038] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:49.961 [2024-07-22 18:44:01.757201] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.961 18:44:01 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:49.961 18:44:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:53.246 Initializing NVMe Controllers 00:36:53.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:53.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:53.246 Initialization complete. Launching workers. 
00:36:53.246 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 7513, failed: 0 00:36:53.246 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1044, failed to submit 6469 00:36:53.246 success 841, unsuccess 203, failed 0 00:36:53.246 18:44:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:53.246 18:44:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:57.459 Initializing NVMe Controllers 00:36:57.459 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:57.459 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:57.459 Initialization complete. Launching workers. 00:36:57.459 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 6028, failed: 0 00:36:57.459 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1305, failed to submit 4723 00:36:57.459 success 255, unsuccess 1050, failed 0 00:36:57.459 18:44:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:57.459 18:44:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:00.034 Initializing NVMe Controllers 00:37:00.034 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:00.034 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:00.034 Initialization complete. Launching workers. 
00:37:00.034 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 25451, failed: 0 00:37:00.034 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2500, failed to submit 22951 00:37:00.034 success 108, unsuccess 2392, failed 0 00:37:00.034 18:44:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:00.034 18:44:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:00.034 18:44:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:00.034 18:44:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:00.034 18:44:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:00.034 18:44:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:00.034 18:44:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:00.602 18:44:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:00.602 18:44:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 110716 00:37:00.602 18:44:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 110716 ']' 00:37:00.602 18:44:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 110716 00:37:00.602 18:44:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:37:00.602 18:44:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:00.602 18:44:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 110716 00:37:00.602 killing process with pid 110716 00:37:00.602 18:44:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:00.602 18:44:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:00.602 18:44:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 110716' 00:37:00.602 18:44:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 110716 00:37:00.602 18:44:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 110716 00:37:01.977 ************************************ 00:37:01.977 END TEST spdk_target_abort 00:37:01.977 ************************************ 00:37:01.977 00:37:01.977 real 0m11.997s 00:37:01.977 user 0m46.468s 00:37:01.977 sys 0m1.969s 00:37:01.977 18:44:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:01.977 18:44:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:01.977 18:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:37:01.977 18:44:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:01.977 18:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:01.977 18:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:01.977 18:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:01.977 
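For reference, the spdk_target_abort test that just completed reduces to a handful of RPC calls plus three runs of the abort example, all of which appear verbatim in the trace: the local NVMe device at 0000:00:10.0 is attached as bdev spdk_targetn1, exported over NVMe/TCP at 10.0.0.2:4420 under nqn.2016-06.io.spdk:testnqn, and then driven with I/O and abort commands at queue depths 4, 24 and 64. The sketch below restates those steps as direct rpc.py invocations; the test itself goes through the rpc_cmd wrapper against the nvmf_tgt started earlier, so treat this as an illustrative equivalent rather than the script's literal text.

    # spdk_target_abort flow, restated from the trace (rpc.py assumed to reach the nvmf_tgt via /var/tmp/spdk.sock)
    scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target   # creates bdev spdk_targetn1
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
    for qd in 4 24 64; do                           # the three queue depths whose results are printed above
        build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn    # teardown, mirroring abort_qd_sizes.sh@54-55
    scripts/rpc.py bdev_nvme_detach_controller spdk_target

The queue-depth sweep is the point of the test: it exercises abort handling in the TCP transport both with few and with many commands outstanding at once.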
************************************ 00:37:01.977 START TEST kernel_target_abort 00:37:01.977 ************************************ 00:37:01.977 18:44:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:37:01.977 18:44:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:01.978 18:44:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:37:01.978 18:44:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:01.978 18:44:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:01.978 18:44:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:01.978 18:44:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:01.978 18:44:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:01.978 18:44:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:01.978 18:44:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:01.978 18:44:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:01.978 18:44:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:01.978 18:44:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:01.978 18:44:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:01.978 18:44:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:37:01.978 18:44:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:01.978 18:44:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:01.978 18:44:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:01.978 18:44:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:37:01.978 18:44:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:37:01.978 18:44:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:37:01.978 18:44:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:01.978 18:44:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:37:02.236 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:02.236 Waiting for block devices as requested 00:37:02.236 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:37:02.236 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:37:03.199 18:44:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:37:03.199 18:44:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:03.199 18:44:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:37:03.199 18:44:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:37:03.199 18:44:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:03.199 18:44:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:37:03.199 18:44:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:37:03.199 18:44:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:37:03.199 18:44:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:37:03.199 No valid GPT data, bailing 00:37:03.199 18:44:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:03.199 18:44:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:37:03.199 18:44:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:37:03.199 18:44:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:37:03.199 18:44:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:37:03.199 18:44:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:37:03.199 18:44:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:37:03.199 18:44:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:37:03.199 18:44:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:37:03.199 18:44:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:37:03.199 18:44:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:37:03.199 18:44:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:37:03.199 18:44:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:37:03.199 No valid GPT data, bailing 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:37:03.199 No valid GPT data, bailing 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:37:03.199 No valid GPT data, bailing 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:03.199 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da --hostid=0b8484e2-e129-4a11-8748-0b3c728771da -a 10.0.0.1 -t tcp -s 4420 00:37:03.458 00:37:03.458 Discovery Log Number of Records 2, Generation counter 2 00:37:03.458 =====Discovery Log Entry 0====== 00:37:03.458 trtype: tcp 00:37:03.458 adrfam: ipv4 00:37:03.458 subtype: current discovery subsystem 00:37:03.458 treq: not specified, sq flow control disable supported 00:37:03.458 portid: 1 00:37:03.458 trsvcid: 4420 00:37:03.458 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:03.458 traddr: 10.0.0.1 00:37:03.458 eflags: none 00:37:03.458 sectype: none 00:37:03.458 =====Discovery Log Entry 1====== 00:37:03.458 trtype: tcp 00:37:03.458 adrfam: ipv4 00:37:03.458 subtype: nvme subsystem 00:37:03.458 treq: not specified, sq flow control disable supported 00:37:03.458 portid: 1 00:37:03.458 trsvcid: 4420 00:37:03.458 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:03.458 traddr: 10.0.0.1 00:37:03.458 eflags: none 00:37:03.458 sectype: none 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:03.458 18:44:15 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:03.458 18:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:06.771 Initializing NVMe Controllers 00:37:06.771 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:06.771 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:06.771 Initialization complete. Launching workers. 00:37:06.771 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 22872, failed: 0 00:37:06.771 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22872, failed to submit 0 00:37:06.771 success 0, unsuccess 22872, failed 0 00:37:06.771 18:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:06.771 18:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:10.063 Initializing NVMe Controllers 00:37:10.063 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:10.063 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:10.063 Initialization complete. Launching workers. 
00:37:10.063 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 53579, failed: 0 00:37:10.063 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22997, failed to submit 30582 00:37:10.063 success 0, unsuccess 22997, failed 0 00:37:10.063 18:44:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:10.063 18:44:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:13.349 Initializing NVMe Controllers 00:37:13.349 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:13.349 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:13.349 Initialization complete. Launching workers. 00:37:13.349 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 62202, failed: 0 00:37:13.349 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15538, failed to submit 46664 00:37:13.349 success 0, unsuccess 15538, failed 0 00:37:13.349 18:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:13.349 18:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:13.349 18:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:37:13.349 18:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:13.349 18:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:13.349 18:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:13.349 18:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:13.349 18:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:37:13.349 18:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:37:13.349 18:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:37:13.917 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:14.852 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:37:14.852 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:37:14.852 00:37:14.852 real 0m13.109s 00:37:14.852 user 0m7.206s 00:37:14.852 sys 0m3.725s 00:37:14.852 18:44:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:14.852 ************************************ 00:37:14.852 END TEST kernel_target_abort 00:37:14.852 ************************************ 00:37:14.852 18:44:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:14.852 18:44:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:37:14.852 18:44:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:14.852 
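The kernel_target_abort pass above repeats the same abort workload, but against a kernel nvmet target configured through configfs instead of the SPDK application: the trace shows the subsystem, namespace and port directories being created and the values /dev/nvme1n1, 10.0.0.1, tcp, 4420 and ipv4 being written into them before the port is linked to the subsystem and "nvme discover" verifies the listener. The sketch below spells that out; the written values come from the trace, while the attribute file names (attr_*, device_path, enable) follow the standard nvmet configfs layout and are not visible verbatim in the log.

    # Kernel NVMe/TCP target via configfs, reconstructed from the configure_kernel_target trace above
    modprobe nvmet                                      # nvmet_tcp also ends up loaded (both are removed at teardown)
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_model"
    echo 1              > "$sub/attr_allow_any_host"        # accept any host NQN
    echo /dev/nvme1n1   > "$sub/namespaces/1/device_path"   # back the namespace with the unused local disk
    echo 1              > "$sub/namespaces/1/enable"
    echo 10.0.0.1       > "$port/addr_traddr"               # listen on the host-side address of the veth topology
    echo tcp            > "$port/addr_trtype"
    echo 4420           > "$port/addr_trsvcid"
    echo ipv4           > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"                        # activates the listener
    nvme discover -t tcp -a 10.0.0.1 -s 4420                # should report the two discovery log entries shown above

Tearing it down is the mirror image, visible at the end of the trace: remove the port/subsystems symlink, rmdir the namespace, port and subsystem directories, then "modprobe -r nvmet_tcp nvmet".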
18:44:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:14.852 18:44:26 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:14.852 18:44:26 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:37:14.852 18:44:26 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:14.852 18:44:26 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:37:14.852 18:44:26 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:14.852 18:44:26 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:14.852 rmmod nvme_tcp 00:37:15.110 rmmod nvme_fabrics 00:37:15.110 rmmod nvme_keyring 00:37:15.110 18:44:26 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:15.110 18:44:26 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:37:15.110 18:44:26 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:37:15.110 18:44:26 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 110716 ']' 00:37:15.110 18:44:26 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 110716 00:37:15.110 18:44:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 110716 ']' 00:37:15.110 18:44:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 110716 00:37:15.110 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (110716) - No such process 00:37:15.110 Process with pid 110716 is not found 00:37:15.110 18:44:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 110716 is not found' 00:37:15.110 18:44:26 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:37:15.110 18:44:26 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:37:15.369 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:15.369 Waiting for block devices as requested 00:37:15.369 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:37:15.627 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:37:15.627 18:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:15.627 18:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:15.627 18:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:15.627 18:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:15.627 18:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:15.627 18:44:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:15.627 18:44:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:15.627 18:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:37:15.627 00:37:15.627 real 0m28.466s 00:37:15.627 user 0m54.858s 00:37:15.627 sys 0m7.176s 00:37:15.627 ************************************ 00:37:15.627 END TEST nvmf_abort_qd_sizes 00:37:15.627 ************************************ 00:37:15.627 18:44:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:15.627 18:44:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:15.627 18:44:27 -- common/autotest_common.sh@1142 -- # return 0 00:37:15.628 18:44:27 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:37:15.628 18:44:27 -- common/autotest_common.sh@1099 -- # 
'[' 2 -le 1 ']' 00:37:15.628 18:44:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:15.628 18:44:27 -- common/autotest_common.sh@10 -- # set +x 00:37:15.628 ************************************ 00:37:15.628 START TEST keyring_file 00:37:15.628 ************************************ 00:37:15.628 18:44:27 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:37:15.886 * Looking for test storage... 00:37:15.886 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:37:15.886 18:44:27 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:37:15.886 18:44:27 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:15.886 18:44:27 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:15.886 18:44:27 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:15.886 18:44:27 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:15.886 18:44:27 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.886 18:44:27 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.886 18:44:27 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.886 18:44:27 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:15.886 18:44:27 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@47 -- # : 0 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:15.886 18:44:27 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:15.886 18:44:27 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:15.886 18:44:27 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:15.886 18:44:27 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:15.886 18:44:27 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:15.886 18:44:27 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:15.886 18:44:27 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:15.886 18:44:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:15.886 18:44:27 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:15.886 18:44:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:15.886 18:44:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:15.886 18:44:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:15.886 18:44:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.tUL3HQHlR0 00:37:15.886 18:44:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:15.886 18:44:27 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:15.886 18:44:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.tUL3HQHlR0 00:37:15.886 18:44:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.tUL3HQHlR0 00:37:15.886 18:44:27 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.tUL3HQHlR0 00:37:15.886 18:44:27 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:15.886 18:44:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:15.886 18:44:27 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:15.887 18:44:27 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:15.887 18:44:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:15.887 18:44:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:15.887 18:44:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.bvVeiIwjau 00:37:15.887 18:44:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:15.887 18:44:27 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:15.887 18:44:27 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:15.887 18:44:27 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:15.887 18:44:27 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:15.887 18:44:27 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:15.887 18:44:27 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:15.887 18:44:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.bvVeiIwjau 00:37:15.887 18:44:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.bvVeiIwjau 00:37:15.887 18:44:27 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.bvVeiIwjau 00:37:15.887 18:44:27 keyring_file -- keyring/file.sh@30 -- # tgtpid=111817 00:37:15.887 18:44:27 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:15.887 18:44:27 keyring_file -- keyring/file.sh@32 -- # waitforlisten 111817 00:37:15.887 18:44:27 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 111817 ']' 00:37:15.887 18:44:27 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:15.887 18:44:27 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:15.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:15.887 18:44:27 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:15.887 18:44:27 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:15.887 18:44:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:16.145 [2024-07-22 18:44:27.996504] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:37:16.145 [2024-07-22 18:44:27.996696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111817 ] 00:37:16.404 [2024-07-22 18:44:28.175546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.662 [2024-07-22 18:44:28.487162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:17.597 18:44:29 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:17.597 18:44:29 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:17.597 18:44:29 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:17.597 18:44:29 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.597 18:44:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:17.597 [2024-07-22 18:44:29.405865] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:17.597 null0 00:37:17.597 [2024-07-22 18:44:29.438201] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:17.597 [2024-07-22 18:44:29.438603] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:17.597 [2024-07-22 18:44:29.446192] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:37:17.597 18:44:29 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.597 18:44:29 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:17.597 18:44:29 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:17.597 18:44:29 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:17.597 18:44:29 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:17.597 18:44:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:17.597 18:44:29 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:17.597 18:44:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:17.597 18:44:29 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:17.597 18:44:29 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.597 18:44:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:17.597 [2024-07-22 18:44:29.462231] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:17.597 2024/07/22 18:44:29 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:37:17.597 request: 00:37:17.597 { 00:37:17.597 "method": "nvmf_subsystem_add_listener", 00:37:17.597 "params": { 00:37:17.597 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:17.597 "secure_channel": false, 00:37:17.597 "listen_address": { 00:37:17.597 "trtype": "tcp", 00:37:17.597 "traddr": "127.0.0.1", 00:37:17.597 "trsvcid": "4420" 00:37:17.597 } 00:37:17.597 } 00:37:17.597 } 00:37:17.597 Got JSON-RPC error 
response 00:37:17.597 GoRPCClient: error on JSON-RPC call 00:37:17.597 18:44:29 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:17.597 18:44:29 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:17.597 18:44:29 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:17.597 18:44:29 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:17.597 18:44:29 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:17.597 18:44:29 keyring_file -- keyring/file.sh@46 -- # bperfpid=111852 00:37:17.597 18:44:29 keyring_file -- keyring/file.sh@48 -- # waitforlisten 111852 /var/tmp/bperf.sock 00:37:17.597 18:44:29 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 111852 ']' 00:37:17.597 18:44:29 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:17.597 18:44:29 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:17.597 18:44:29 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:17.597 18:44:29 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:17.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:17.597 18:44:29 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:17.597 18:44:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:17.597 [2024-07-22 18:44:29.588224] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:37:17.597 [2024-07-22 18:44:29.588453] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111852 ] 00:37:17.856 [2024-07-22 18:44:29.766285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:18.424 [2024-07-22 18:44:30.133879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:18.682 18:44:30 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:18.682 18:44:30 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:18.682 18:44:30 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tUL3HQHlR0 00:37:18.682 18:44:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tUL3HQHlR0 00:37:18.941 18:44:30 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.bvVeiIwjau 00:37:18.941 18:44:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.bvVeiIwjau 00:37:19.199 18:44:31 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:37:19.199 18:44:31 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:37:19.199 18:44:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:19.199 18:44:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:19.199 18:44:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:19.766 18:44:31 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.tUL3HQHlR0 == 
\/\t\m\p\/\t\m\p\.\t\U\L\3\H\Q\H\l\R\0 ]] 00:37:19.766 18:44:31 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:37:19.766 18:44:31 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:19.766 18:44:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:19.766 18:44:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:19.766 18:44:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:20.025 18:44:31 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.bvVeiIwjau == \/\t\m\p\/\t\m\p\.\b\v\V\e\i\I\w\j\a\u ]] 00:37:20.025 18:44:31 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:37:20.025 18:44:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:20.025 18:44:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:20.025 18:44:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:20.025 18:44:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:20.025 18:44:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:20.284 18:44:32 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:37:20.284 18:44:32 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:37:20.284 18:44:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:20.284 18:44:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:20.284 18:44:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:20.284 18:44:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:20.284 18:44:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:20.542 18:44:32 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:20.542 18:44:32 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:20.542 18:44:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:20.801 [2024-07-22 18:44:32.611056] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:20.801 nvme0n1 00:37:20.801 18:44:32 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:37:20.801 18:44:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:20.801 18:44:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:20.801 18:44:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:20.801 18:44:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:20.801 18:44:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:21.059 18:44:33 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:37:21.059 18:44:33 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:37:21.059 18:44:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:21.059 18:44:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:21.059 18:44:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:37:21.059 18:44:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:21.059 18:44:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:21.639 18:44:33 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:37:21.639 18:44:33 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:21.639 Running I/O for 1 seconds... 00:37:22.572 00:37:22.572 Latency(us) 00:37:22.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:22.572 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:22.572 nvme0n1 : 1.01 7358.03 28.74 0.00 0.00 17318.73 8281.37 32887.16 00:37:22.572 =================================================================================================================== 00:37:22.572 Total : 7358.03 28.74 0.00 0.00 17318.73 8281.37 32887.16 00:37:22.572 0 00:37:22.572 18:44:34 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:22.572 18:44:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:22.830 18:44:34 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:37:22.830 18:44:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:22.830 18:44:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:22.830 18:44:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:22.830 18:44:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:22.830 18:44:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:23.088 18:44:35 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:37:23.088 18:44:35 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:37:23.088 18:44:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:23.088 18:44:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:23.088 18:44:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:23.088 18:44:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:23.088 18:44:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:23.653 18:44:35 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:23.653 18:44:35 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:23.653 18:44:35 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:23.653 18:44:35 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:23.653 18:44:35 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:23.653 18:44:35 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:23.653 18:44:35 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:23.653 18:44:35 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
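The get_refcnt checks sprinkled through this test condense to one RPC plus a jq filter; with the same rpc.py path and bperf socket used above, the pattern is:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
        | jq -r '.[] | select(.name == "key0") | .refcnt'

key0 reads 2 while controller nvme0 holds a reference to it and drops back to 1 once bdev_nvme_detach_controller nvme0 returns, which is what the (( 2 == 2 )) and (( 1 == 1 )) assertions in the trace are checking.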
00:37:23.653 18:44:35 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:23.653 18:44:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:23.911 [2024-07-22 18:44:35.700571] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:23.911 [2024-07-22 18:44:35.701391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030000 (107): Transport endpoint is not connected 00:37:23.911 [2024-07-22 18:44:35.702343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030000 (9): Bad file descriptor 00:37:23.911 [2024-07-22 18:44:35.703335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:23.911 [2024-07-22 18:44:35.703373] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:23.911 [2024-07-22 18:44:35.703393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:23.911 2024/07/22 18:44:35 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:37:23.911 request: 00:37:23.911 { 00:37:23.911 "method": "bdev_nvme_attach_controller", 00:37:23.911 "params": { 00:37:23.911 "name": "nvme0", 00:37:23.911 "trtype": "tcp", 00:37:23.911 "traddr": "127.0.0.1", 00:37:23.911 "adrfam": "ipv4", 00:37:23.911 "trsvcid": "4420", 00:37:23.911 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:23.911 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:23.911 "prchk_reftag": false, 00:37:23.911 "prchk_guard": false, 00:37:23.911 "hdgst": false, 00:37:23.911 "ddgst": false, 00:37:23.911 "psk": "key1" 00:37:23.911 } 00:37:23.911 } 00:37:23.911 Got JSON-RPC error response 00:37:23.911 GoRPCClient: error on JSON-RPC call 00:37:23.911 18:44:35 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:23.911 18:44:35 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:23.911 18:44:35 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:23.911 18:44:35 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:23.911 18:44:35 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:37:23.911 18:44:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:23.911 18:44:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:23.911 18:44:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:23.911 18:44:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:23.911 18:44:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:24.169 18:44:35 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 
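The attach with --psk key1 is a negative test: presumably only the key0 PSK is accepted on the target side, so the connect is expected to fail (hence the 'Transport endpoint is not connected' errors), and the NOT wrapper turns that expected failure into a pass. A simplified sketch of the helper being traced from autotest_common.sh, not its full implementation:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # death by signal still counts as a real failure
        (( es != 0 ))                    # pass only when the wrapped command failed
    }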
00:37:24.169 18:44:35 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:37:24.169 18:44:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:24.169 18:44:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:24.169 18:44:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:24.169 18:44:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:24.169 18:44:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:24.426 18:44:36 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:24.426 18:44:36 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:37:24.426 18:44:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:24.682 18:44:36 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:37:24.682 18:44:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:24.939 18:44:36 keyring_file -- keyring/file.sh@77 -- # jq length 00:37:24.939 18:44:36 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:37:24.939 18:44:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:25.502 18:44:37 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:37:25.502 18:44:37 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.tUL3HQHlR0 00:37:25.502 18:44:37 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.tUL3HQHlR0 00:37:25.502 18:44:37 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:25.502 18:44:37 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.tUL3HQHlR0 00:37:25.502 18:44:37 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:25.502 18:44:37 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:25.502 18:44:37 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:25.502 18:44:37 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:25.502 18:44:37 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tUL3HQHlR0 00:37:25.502 18:44:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tUL3HQHlR0 00:37:25.760 [2024-07-22 18:44:37.565663] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.tUL3HQHlR0': 0100660 00:37:25.760 [2024-07-22 18:44:37.565751] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:25.760 2024/07/22 18:44:37 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.tUL3HQHlR0], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:37:25.760 request: 00:37:25.760 { 00:37:25.760 "method": "keyring_file_add_key", 00:37:25.760 "params": { 00:37:25.760 "name": "key0", 00:37:25.760 "path": "/tmp/tmp.tUL3HQHlR0" 00:37:25.760 } 00:37:25.760 } 00:37:25.760 Got JSON-RPC error response 00:37:25.760 GoRPCClient: error on JSON-RPC call 00:37:25.760 18:44:37 keyring_file -- 
common/autotest_common.sh@651 -- # es=1 00:37:25.760 18:44:37 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:25.760 18:44:37 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:25.760 18:44:37 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:25.760 18:44:37 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.tUL3HQHlR0 00:37:25.760 18:44:37 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tUL3HQHlR0 00:37:25.760 18:44:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tUL3HQHlR0 00:37:26.017 18:44:37 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.tUL3HQHlR0 00:37:26.017 18:44:37 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:37:26.017 18:44:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:26.017 18:44:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:26.017 18:44:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:26.017 18:44:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:26.017 18:44:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:26.275 18:44:38 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:37:26.275 18:44:38 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:26.275 18:44:38 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:26.275 18:44:38 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:26.275 18:44:38 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:26.275 18:44:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:26.275 18:44:38 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:26.275 18:44:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:26.275 18:44:38 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:26.275 18:44:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:26.532 [2024-07-22 18:44:38.490167] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.tUL3HQHlR0': No such file or directory 00:37:26.532 [2024-07-22 18:44:38.490280] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:26.532 [2024-07-22 18:44:38.490326] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:26.532 [2024-07-22 18:44:38.490345] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:26.532 [2024-07-22 18:44:38.490363] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 
127.0.0.1) 00:37:26.532 2024/07/22 18:44:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:37:26.532 request: 00:37:26.532 { 00:37:26.532 "method": "bdev_nvme_attach_controller", 00:37:26.532 "params": { 00:37:26.532 "name": "nvme0", 00:37:26.532 "trtype": "tcp", 00:37:26.532 "traddr": "127.0.0.1", 00:37:26.532 "adrfam": "ipv4", 00:37:26.532 "trsvcid": "4420", 00:37:26.532 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:26.532 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:26.532 "prchk_reftag": false, 00:37:26.532 "prchk_guard": false, 00:37:26.532 "hdgst": false, 00:37:26.532 "ddgst": false, 00:37:26.532 "psk": "key0" 00:37:26.532 } 00:37:26.532 } 00:37:26.532 Got JSON-RPC error response 00:37:26.532 GoRPCClient: error on JSON-RPC call 00:37:26.532 18:44:38 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:26.532 18:44:38 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:26.532 18:44:38 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:26.532 18:44:38 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:26.532 18:44:38 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:37:26.532 18:44:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:26.790 18:44:38 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:26.790 18:44:38 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:26.790 18:44:38 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:26.790 18:44:38 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:26.790 18:44:38 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:26.790 18:44:38 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:26.790 18:44:38 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ZkWxH40u21 00:37:26.790 18:44:38 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:26.790 18:44:38 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:26.790 18:44:38 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:26.790 18:44:38 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:26.790 18:44:38 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:26.790 18:44:38 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:26.790 18:44:38 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:27.047 18:44:38 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ZkWxH40u21 00:37:27.047 18:44:38 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ZkWxH40u21 00:37:27.047 18:44:38 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.ZkWxH40u21 00:37:27.047 18:44:38 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZkWxH40u21 00:37:27.047 18:44:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZkWxH40u21 00:37:27.304 
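prep_key has just regenerated key0 into /tmp/tmp.ZkWxH40u21 via the python one-liner whose body xtrace does not show. A sketch of that step, assuming the key string is used as literal bytes and the CRC32 is appended little-endian before base64 encoding:

    path=$(mktemp)
    # digest 0 -> '00' (no hash); the key-bytes-plus-CRC32 layout is an assumption
    python3 -c 'import base64, zlib; k = b"00112233445566778899aabbccddeeff"; crc = zlib.crc32(k).to_bytes(4, "little"); print("NVMeTLSkey-1:00:%s:" % base64.b64encode(k + crc).decode(), end="")' > "$path"
    chmod 0600 "$path"   # looser modes are rejected, as the 0660 attempt above showed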
18:44:39 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:27.304 18:44:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:27.562 nvme0n1 00:37:27.562 18:44:39 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:37:27.562 18:44:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:27.562 18:44:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:27.562 18:44:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:27.562 18:44:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:27.562 18:44:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:27.821 18:44:39 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:37:27.821 18:44:39 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:37:27.821 18:44:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:28.079 18:44:39 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:37:28.079 18:44:39 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:37:28.079 18:44:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:28.079 18:44:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:28.079 18:44:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:28.336 18:44:40 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:37:28.336 18:44:40 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:37:28.336 18:44:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:28.336 18:44:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:28.336 18:44:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:28.336 18:44:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:28.336 18:44:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:28.595 18:44:40 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:37:28.595 18:44:40 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:28.595 18:44:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:28.852 18:44:40 keyring_file -- keyring/file.sh@104 -- # jq length 00:37:28.852 18:44:40 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:37:28.852 18:44:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:29.110 18:44:41 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:37:29.110 18:44:41 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZkWxH40u21 00:37:29.110 18:44:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key 
key0 /tmp/tmp.ZkWxH40u21 00:37:29.367 18:44:41 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.bvVeiIwjau 00:37:29.367 18:44:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.bvVeiIwjau 00:37:29.625 18:44:41 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:29.625 18:44:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:30.190 nvme0n1 00:37:30.190 18:44:42 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:37:30.190 18:44:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:30.449 18:44:42 keyring_file -- keyring/file.sh@112 -- # config='{ 00:37:30.449 "subsystems": [ 00:37:30.449 { 00:37:30.449 "subsystem": "keyring", 00:37:30.449 "config": [ 00:37:30.449 { 00:37:30.449 "method": "keyring_file_add_key", 00:37:30.449 "params": { 00:37:30.449 "name": "key0", 00:37:30.449 "path": "/tmp/tmp.ZkWxH40u21" 00:37:30.449 } 00:37:30.449 }, 00:37:30.449 { 00:37:30.449 "method": "keyring_file_add_key", 00:37:30.449 "params": { 00:37:30.449 "name": "key1", 00:37:30.449 "path": "/tmp/tmp.bvVeiIwjau" 00:37:30.449 } 00:37:30.449 } 00:37:30.449 ] 00:37:30.449 }, 00:37:30.449 { 00:37:30.449 "subsystem": "iobuf", 00:37:30.449 "config": [ 00:37:30.449 { 00:37:30.449 "method": "iobuf_set_options", 00:37:30.449 "params": { 00:37:30.449 "large_bufsize": 135168, 00:37:30.449 "large_pool_count": 1024, 00:37:30.449 "small_bufsize": 8192, 00:37:30.449 "small_pool_count": 8192 00:37:30.449 } 00:37:30.449 } 00:37:30.449 ] 00:37:30.449 }, 00:37:30.449 { 00:37:30.449 "subsystem": "sock", 00:37:30.449 "config": [ 00:37:30.449 { 00:37:30.449 "method": "sock_set_default_impl", 00:37:30.449 "params": { 00:37:30.449 "impl_name": "posix" 00:37:30.449 } 00:37:30.449 }, 00:37:30.449 { 00:37:30.449 "method": "sock_impl_set_options", 00:37:30.449 "params": { 00:37:30.449 "enable_ktls": false, 00:37:30.449 "enable_placement_id": 0, 00:37:30.449 "enable_quickack": false, 00:37:30.449 "enable_recv_pipe": true, 00:37:30.449 "enable_zerocopy_send_client": false, 00:37:30.449 "enable_zerocopy_send_server": true, 00:37:30.449 "impl_name": "ssl", 00:37:30.449 "recv_buf_size": 4096, 00:37:30.449 "send_buf_size": 4096, 00:37:30.449 "tls_version": 0, 00:37:30.449 "zerocopy_threshold": 0 00:37:30.449 } 00:37:30.449 }, 00:37:30.449 { 00:37:30.449 "method": "sock_impl_set_options", 00:37:30.449 "params": { 00:37:30.449 "enable_ktls": false, 00:37:30.449 "enable_placement_id": 0, 00:37:30.449 "enable_quickack": false, 00:37:30.449 "enable_recv_pipe": true, 00:37:30.449 "enable_zerocopy_send_client": false, 00:37:30.449 "enable_zerocopy_send_server": true, 00:37:30.449 "impl_name": "posix", 00:37:30.449 "recv_buf_size": 2097152, 00:37:30.449 "send_buf_size": 2097152, 00:37:30.449 "tls_version": 0, 00:37:30.449 "zerocopy_threshold": 0 00:37:30.449 } 00:37:30.449 } 00:37:30.449 ] 00:37:30.449 }, 00:37:30.449 { 00:37:30.449 "subsystem": "vmd", 00:37:30.449 "config": [] 00:37:30.449 }, 00:37:30.449 { 00:37:30.449 "subsystem": "accel", 00:37:30.449 "config": [ 00:37:30.449 { 
00:37:30.449 "method": "accel_set_options", 00:37:30.449 "params": { 00:37:30.449 "buf_count": 2048, 00:37:30.449 "large_cache_size": 16, 00:37:30.449 "sequence_count": 2048, 00:37:30.449 "small_cache_size": 128, 00:37:30.449 "task_count": 2048 00:37:30.449 } 00:37:30.449 } 00:37:30.449 ] 00:37:30.449 }, 00:37:30.449 { 00:37:30.449 "subsystem": "bdev", 00:37:30.449 "config": [ 00:37:30.449 { 00:37:30.449 "method": "bdev_set_options", 00:37:30.449 "params": { 00:37:30.449 "bdev_auto_examine": true, 00:37:30.449 "bdev_io_cache_size": 256, 00:37:30.449 "bdev_io_pool_size": 65535, 00:37:30.449 "iobuf_large_cache_size": 16, 00:37:30.449 "iobuf_small_cache_size": 128 00:37:30.449 } 00:37:30.449 }, 00:37:30.449 { 00:37:30.449 "method": "bdev_raid_set_options", 00:37:30.449 "params": { 00:37:30.449 "process_max_bandwidth_mb_sec": 0, 00:37:30.449 "process_window_size_kb": 1024 00:37:30.449 } 00:37:30.449 }, 00:37:30.449 { 00:37:30.449 "method": "bdev_iscsi_set_options", 00:37:30.449 "params": { 00:37:30.449 "timeout_sec": 30 00:37:30.449 } 00:37:30.449 }, 00:37:30.449 { 00:37:30.449 "method": "bdev_nvme_set_options", 00:37:30.449 "params": { 00:37:30.449 "action_on_timeout": "none", 00:37:30.449 "allow_accel_sequence": false, 00:37:30.449 "arbitration_burst": 0, 00:37:30.449 "bdev_retry_count": 3, 00:37:30.449 "ctrlr_loss_timeout_sec": 0, 00:37:30.449 "delay_cmd_submit": true, 00:37:30.449 "dhchap_dhgroups": [ 00:37:30.449 "null", 00:37:30.449 "ffdhe2048", 00:37:30.449 "ffdhe3072", 00:37:30.449 "ffdhe4096", 00:37:30.449 "ffdhe6144", 00:37:30.449 "ffdhe8192" 00:37:30.449 ], 00:37:30.449 "dhchap_digests": [ 00:37:30.449 "sha256", 00:37:30.449 "sha384", 00:37:30.449 "sha512" 00:37:30.449 ], 00:37:30.449 "disable_auto_failback": false, 00:37:30.449 "fast_io_fail_timeout_sec": 0, 00:37:30.449 "generate_uuids": false, 00:37:30.449 "high_priority_weight": 0, 00:37:30.449 "io_path_stat": false, 00:37:30.449 "io_queue_requests": 512, 00:37:30.449 "keep_alive_timeout_ms": 10000, 00:37:30.449 "low_priority_weight": 0, 00:37:30.449 "medium_priority_weight": 0, 00:37:30.449 "nvme_adminq_poll_period_us": 10000, 00:37:30.449 "nvme_error_stat": false, 00:37:30.449 "nvme_ioq_poll_period_us": 0, 00:37:30.449 "rdma_cm_event_timeout_ms": 0, 00:37:30.449 "rdma_max_cq_size": 0, 00:37:30.449 "rdma_srq_size": 0, 00:37:30.449 "reconnect_delay_sec": 0, 00:37:30.449 "timeout_admin_us": 0, 00:37:30.449 "timeout_us": 0, 00:37:30.449 "transport_ack_timeout": 0, 00:37:30.449 "transport_retry_count": 4, 00:37:30.449 "transport_tos": 0 00:37:30.449 } 00:37:30.449 }, 00:37:30.449 { 00:37:30.449 "method": "bdev_nvme_attach_controller", 00:37:30.449 "params": { 00:37:30.449 "adrfam": "IPv4", 00:37:30.449 "ctrlr_loss_timeout_sec": 0, 00:37:30.449 "ddgst": false, 00:37:30.449 "fast_io_fail_timeout_sec": 0, 00:37:30.449 "hdgst": false, 00:37:30.449 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:30.450 "name": "nvme0", 00:37:30.450 "prchk_guard": false, 00:37:30.450 "prchk_reftag": false, 00:37:30.450 "psk": "key0", 00:37:30.450 "reconnect_delay_sec": 0, 00:37:30.450 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:30.450 "traddr": "127.0.0.1", 00:37:30.450 "trsvcid": "4420", 00:37:30.450 "trtype": "TCP" 00:37:30.450 } 00:37:30.450 }, 00:37:30.450 { 00:37:30.450 "method": "bdev_nvme_set_hotplug", 00:37:30.450 "params": { 00:37:30.450 "enable": false, 00:37:30.450 "period_us": 100000 00:37:30.450 } 00:37:30.450 }, 00:37:30.450 { 00:37:30.450 "method": "bdev_wait_for_examine" 00:37:30.450 } 00:37:30.450 ] 00:37:30.450 }, 00:37:30.450 { 
00:37:30.450 "subsystem": "nbd", 00:37:30.450 "config": [] 00:37:30.450 } 00:37:30.450 ] 00:37:30.450 }' 00:37:30.450 18:44:42 keyring_file -- keyring/file.sh@114 -- # killprocess 111852 00:37:30.450 18:44:42 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 111852 ']' 00:37:30.450 18:44:42 keyring_file -- common/autotest_common.sh@952 -- # kill -0 111852 00:37:30.450 18:44:42 keyring_file -- common/autotest_common.sh@953 -- # uname 00:37:30.450 18:44:42 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:30.450 18:44:42 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 111852 00:37:30.450 killing process with pid 111852 00:37:30.450 Received shutdown signal, test time was about 1.000000 seconds 00:37:30.450 00:37:30.450 Latency(us) 00:37:30.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:30.450 =================================================================================================================== 00:37:30.450 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:30.450 18:44:42 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:30.450 18:44:42 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:30.450 18:44:42 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 111852' 00:37:30.450 18:44:42 keyring_file -- common/autotest_common.sh@967 -- # kill 111852 00:37:30.450 18:44:42 keyring_file -- common/autotest_common.sh@972 -- # wait 111852 00:37:31.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:31.826 18:44:43 keyring_file -- keyring/file.sh@117 -- # bperfpid=112340 00:37:31.826 18:44:43 keyring_file -- keyring/file.sh@119 -- # waitforlisten 112340 /var/tmp/bperf.sock 00:37:31.826 18:44:43 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 112340 ']' 00:37:31.826 18:44:43 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:31.826 18:44:43 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:31.826 18:44:43 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:37:31.826 18:44:43 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:31.826 18:44:43 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:31.826 18:44:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:31.826 18:44:43 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:37:31.826 "subsystems": [ 00:37:31.826 { 00:37:31.826 "subsystem": "keyring", 00:37:31.826 "config": [ 00:37:31.826 { 00:37:31.826 "method": "keyring_file_add_key", 00:37:31.826 "params": { 00:37:31.826 "name": "key0", 00:37:31.826 "path": "/tmp/tmp.ZkWxH40u21" 00:37:31.826 } 00:37:31.826 }, 00:37:31.826 { 00:37:31.827 "method": "keyring_file_add_key", 00:37:31.827 "params": { 00:37:31.827 "name": "key1", 00:37:31.827 "path": "/tmp/tmp.bvVeiIwjau" 00:37:31.827 } 00:37:31.827 } 00:37:31.827 ] 00:37:31.827 }, 00:37:31.827 { 00:37:31.827 "subsystem": "iobuf", 00:37:31.827 "config": [ 00:37:31.827 { 00:37:31.827 "method": "iobuf_set_options", 00:37:31.827 "params": { 00:37:31.827 "large_bufsize": 135168, 00:37:31.827 "large_pool_count": 1024, 00:37:31.827 "small_bufsize": 8192, 00:37:31.827 "small_pool_count": 8192 00:37:31.827 } 00:37:31.827 } 00:37:31.827 ] 00:37:31.827 }, 00:37:31.827 { 00:37:31.827 "subsystem": "sock", 00:37:31.827 "config": [ 00:37:31.827 { 00:37:31.827 "method": "sock_set_default_impl", 00:37:31.827 "params": { 00:37:31.827 "impl_name": "posix" 00:37:31.827 } 00:37:31.827 }, 00:37:31.827 { 00:37:31.827 "method": "sock_impl_set_options", 00:37:31.827 "params": { 00:37:31.827 "enable_ktls": false, 00:37:31.827 "enable_placement_id": 0, 00:37:31.827 "enable_quickack": false, 00:37:31.827 "enable_recv_pipe": true, 00:37:31.827 "enable_zerocopy_send_client": false, 00:37:31.827 "enable_zerocopy_send_server": true, 00:37:31.827 "impl_name": "ssl", 00:37:31.827 "recv_buf_size": 4096, 00:37:31.827 "send_buf_size": 4096, 00:37:31.827 "tls_version": 0, 00:37:31.827 "zerocopy_threshold": 0 00:37:31.827 } 00:37:31.827 }, 00:37:31.827 { 00:37:31.827 "method": "sock_impl_set_options", 00:37:31.827 "params": { 00:37:31.827 "enable_ktls": false, 00:37:31.827 "enable_placement_id": 0, 00:37:31.827 "enable_quickack": false, 00:37:31.827 "enable_recv_pipe": true, 00:37:31.827 "enable_zerocopy_send_client": false, 00:37:31.827 "enable_zerocopy_send_server": true, 00:37:31.827 "impl_name": "posix", 00:37:31.827 "recv_buf_size": 2097152, 00:37:31.827 "send_buf_size": 2097152, 00:37:31.827 "tls_version": 0, 00:37:31.827 "zerocopy_threshold": 0 00:37:31.827 } 00:37:31.827 } 00:37:31.827 ] 00:37:31.827 }, 00:37:31.827 { 00:37:31.827 "subsystem": "vmd", 00:37:31.827 "config": [] 00:37:31.827 }, 00:37:31.827 { 00:37:31.827 "subsystem": "accel", 00:37:31.827 "config": [ 00:37:31.827 { 00:37:31.827 "method": "accel_set_options", 00:37:31.827 "params": { 00:37:31.827 "buf_count": 2048, 00:37:31.827 "large_cache_size": 16, 00:37:31.827 "sequence_count": 2048, 00:37:31.827 "small_cache_size": 128, 00:37:31.827 "task_count": 2048 00:37:31.827 } 00:37:31.827 } 00:37:31.827 ] 00:37:31.827 }, 00:37:31.827 { 00:37:31.827 "subsystem": "bdev", 00:37:31.827 "config": [ 00:37:31.827 { 00:37:31.827 "method": "bdev_set_options", 00:37:31.827 "params": { 00:37:31.827 "bdev_auto_examine": true, 00:37:31.827 "bdev_io_cache_size": 256, 00:37:31.827 "bdev_io_pool_size": 65535, 00:37:31.827 "iobuf_large_cache_size": 16, 00:37:31.827 "iobuf_small_cache_size": 128 00:37:31.827 } 00:37:31.827 
}, 00:37:31.827 { 00:37:31.827 "method": "bdev_raid_set_options", 00:37:31.827 "params": { 00:37:31.827 "process_max_bandwidth_mb_sec": 0, 00:37:31.827 "process_window_size_kb": 1024 00:37:31.827 } 00:37:31.827 }, 00:37:31.827 { 00:37:31.827 "method": "bdev_iscsi_set_options", 00:37:31.827 "params": { 00:37:31.827 "timeout_sec": 30 00:37:31.827 } 00:37:31.827 }, 00:37:31.827 { 00:37:31.827 "method": "bdev_nvme_set_options", 00:37:31.827 "params": { 00:37:31.827 "action_on_timeout": "none", 00:37:31.827 "allow_accel_sequence": false, 00:37:31.827 "arbitration_burst": 0, 00:37:31.827 "bdev_retry_count": 3, 00:37:31.827 "ctrlr_loss_timeout_sec": 0, 00:37:31.827 "delay_cmd_submit": true, 00:37:31.827 "dhchap_dhgroups": [ 00:37:31.827 "null", 00:37:31.827 "ffdhe2048", 00:37:31.827 "ffdhe3072", 00:37:31.827 "ffdhe4096", 00:37:31.827 "ffdhe6144", 00:37:31.827 "ffdhe8192" 00:37:31.827 ], 00:37:31.827 "dhchap_digests": [ 00:37:31.827 "sha256", 00:37:31.827 "sha384", 00:37:31.827 "sha512" 00:37:31.827 ], 00:37:31.827 "disable_auto_failback": false, 00:37:31.827 "fast_io_fail_timeout_sec": 0, 00:37:31.827 "generate_uuids": false, 00:37:31.827 "high_priority_weight": 0, 00:37:31.827 "io_path_stat": false, 00:37:31.827 "io_queue_requests": 512, 00:37:31.827 "keep_alive_timeout_ms": 10000, 00:37:31.827 "low_priority_weight": 0, 00:37:31.827 "medium_priority_weight": 0, 00:37:31.827 "nvme_adminq_poll_period_us": 10000, 00:37:31.827 "nvme_error_stat": false, 00:37:31.827 "nvme_ioq_poll_period_us": 0, 00:37:31.827 "rdma_cm_event_timeout_ms": 0, 00:37:31.827 "rdma_max_cq_size": 0, 00:37:31.827 "rdma_srq_size": 0, 00:37:31.827 "reconnect_delay_sec": 0, 00:37:31.827 "timeout_admin_us": 0, 00:37:31.827 "timeout_us": 0, 00:37:31.827 "transport_ack_timeout": 0, 00:37:31.827 "transport_retry_count": 4, 00:37:31.827 "transport_tos": 0 00:37:31.827 } 00:37:31.827 }, 00:37:31.827 { 00:37:31.827 "method": "bdev_nvme_attach_controller", 00:37:31.827 "params": { 00:37:31.827 "adrfam": "IPv4", 00:37:31.827 "ctrlr_loss_timeout_sec": 0, 00:37:31.827 "ddgst": false, 00:37:31.827 "fast_io_fail_timeout_sec": 0, 00:37:31.827 "hdgst": false, 00:37:31.827 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:31.827 "name": "nvme0", 00:37:31.827 "prchk_guard": false, 00:37:31.827 "prchk_reftag": false, 00:37:31.827 "psk": "key0", 00:37:31.827 "reconnect_delay_sec": 0, 00:37:31.827 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:31.827 "traddr": "127.0.0.1", 00:37:31.827 "trsvcid": "4420", 00:37:31.827 "trtype": "TCP" 00:37:31.827 } 00:37:31.827 }, 00:37:31.827 { 00:37:31.827 "method": "bdev_nvme_set_hotplug", 00:37:31.827 "params": { 00:37:31.827 "enable": false, 00:37:31.827 "period_us": 100000 00:37:31.827 } 00:37:31.827 }, 00:37:31.827 { 00:37:31.827 "method": "bdev_wait_for_examine" 00:37:31.827 } 00:37:31.827 ] 00:37:31.827 }, 00:37:31.827 { 00:37:31.827 "subsystem": "nbd", 00:37:31.827 "config": [] 00:37:31.827 } 00:37:31.827 ] 00:37:31.827 }' 00:37:31.827 [2024-07-22 18:44:43.762697] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
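The JSON blob echoed above is handed to bdevperf as its startup config over a process-substitution file descriptor (-c /dev/fd/63 in the trace). Reduced to the pieces that matter for the keyring_file test, a config of the same shape can be passed from a temp file instead; the subsystem list below is an illustrative subset of the full config the test generates, while the key names, key-file paths, NQNs and bdevperf flags are the ones the trace uses.

# Minimal subset of the config above: register two file-based PSKs, then
# attach an NVMe/TCP controller that references the first one by name.
# (The /tmp/tmp.* key files were created earlier in the test run.)
cat > /tmp/bperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        { "method": "keyring_file_add_key", "params": { "name": "key0", "path": "/tmp/tmp.ZkWxH40u21" } },
        { "method": "keyring_file_add_key", "params": { "name": "key1", "path": "/tmp/tmp.bvVeiIwjau" } }
      ]
    },
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller", "params": {
            "name": "nvme0", "trtype": "TCP", "traddr": "127.0.0.1", "trsvcid": "4420", "adrfam": "IPv4",
            "subnqn": "nqn.2016-06.io.spdk:cnode0", "hostnqn": "nqn.2016-06.io.spdk:host0", "psk": "key0"
        } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# Same flags as the trace: 4k randrw for 1 second on core mask 0x2, RPC socket
# at /var/tmp/bperf.sock, -z so it waits for the perform_tests RPC.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c /tmp/bperf.json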
00:37:31.827 [2024-07-22 18:44:43.762983] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112340 ] 00:37:32.086 [2024-07-22 18:44:43.942806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:32.344 [2024-07-22 18:44:44.237602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:32.911 [2024-07-22 18:44:44.707315] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:32.911 18:44:44 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:32.911 18:44:44 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:32.911 18:44:44 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:37:32.911 18:44:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:32.911 18:44:44 keyring_file -- keyring/file.sh@120 -- # jq length 00:37:33.169 18:44:45 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:37:33.169 18:44:45 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:37:33.169 18:44:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:33.169 18:44:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:33.169 18:44:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:33.169 18:44:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:33.169 18:44:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:33.428 18:44:45 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:33.428 18:44:45 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:37:33.428 18:44:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:33.428 18:44:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:33.428 18:44:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:33.428 18:44:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:33.686 18:44:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:33.944 18:44:45 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:33.944 18:44:45 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:33.944 18:44:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:33.944 18:44:45 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:34.203 18:44:46 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:37:34.203 18:44:46 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:34.203 18:44:46 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.ZkWxH40u21 /tmp/tmp.bvVeiIwjau 00:37:34.203 18:44:46 keyring_file -- keyring/file.sh@20 -- # killprocess 112340 00:37:34.203 18:44:46 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 112340 ']' 00:37:34.203 18:44:46 keyring_file -- common/autotest_common.sh@952 -- # kill -0 112340 00:37:34.203 18:44:46 keyring_file -- common/autotest_common.sh@953 -- # uname 00:37:34.203 18:44:46 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
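The assertions in this stretch poll the running bdevperf instance over its RPC socket and pick fields out of keyring_get_keys with jq. Done by hand against the same socket, the checks come down to the commands below; the expected values are the ones the test asserts (two keys registered, refcnt 2 for key0 and 1 for key1, presumably because key0 is the PSK the attached controller is holding), and the repo path is the one used throughout this run.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock

# number of keys currently registered with the keyring module
$RPC -s "$SOCK" keyring_get_keys | jq length                                          # test expects 2

# per-key reference counts, selected by name
$RPC -s "$SOCK" keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'    # expects 2
$RPC -s "$SOCK" keyring_get_keys | jq -r '.[] | select(.name == "key1") | .refcnt'    # expects 1

# the controller attached from the startup config should be visible too
$RPC -s "$SOCK" bdev_nvme_get_controllers | jq -r '.[].name'                          # expects nvme0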
00:37:34.203 18:44:46 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112340 00:37:34.203 18:44:46 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:34.203 killing process with pid 112340 00:37:34.203 18:44:46 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:34.203 18:44:46 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112340' 00:37:34.203 18:44:46 keyring_file -- common/autotest_common.sh@967 -- # kill 112340 00:37:34.203 Received shutdown signal, test time was about 1.000000 seconds 00:37:34.203 00:37:34.203 Latency(us) 00:37:34.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:34.203 =================================================================================================================== 00:37:34.203 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:34.203 18:44:46 keyring_file -- common/autotest_common.sh@972 -- # wait 112340 00:37:35.578 18:44:47 keyring_file -- keyring/file.sh@21 -- # killprocess 111817 00:37:35.578 18:44:47 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 111817 ']' 00:37:35.578 18:44:47 keyring_file -- common/autotest_common.sh@952 -- # kill -0 111817 00:37:35.578 18:44:47 keyring_file -- common/autotest_common.sh@953 -- # uname 00:37:35.578 18:44:47 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:35.578 18:44:47 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 111817 00:37:35.578 killing process with pid 111817 00:37:35.578 18:44:47 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:35.578 18:44:47 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:35.578 18:44:47 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 111817' 00:37:35.578 18:44:47 keyring_file -- common/autotest_common.sh@967 -- # kill 111817 00:37:35.578 18:44:47 keyring_file -- common/autotest_common.sh@972 -- # wait 111817 00:37:35.578 [2024-07-22 18:44:47.430615] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:37:38.110 00:37:38.110 real 0m22.302s 00:37:38.110 user 0m50.255s 00:37:38.110 sys 0m4.186s 00:37:38.110 18:44:49 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:38.110 18:44:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:38.110 ************************************ 00:37:38.110 END TEST keyring_file 00:37:38.110 ************************************ 00:37:38.110 18:44:49 -- common/autotest_common.sh@1142 -- # return 0 00:37:38.110 18:44:49 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:37:38.110 18:44:49 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:37:38.110 18:44:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:38.110 18:44:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:38.110 18:44:49 -- common/autotest_common.sh@10 -- # set +x 00:37:38.110 ************************************ 00:37:38.110 START TEST keyring_linux 00:37:38.110 ************************************ 00:37:38.110 18:44:49 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:37:38.110 * Looking for test storage... 
00:37:38.110 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:37:38.110 18:44:50 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:37:38.110 18:44:50 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:38.110 18:44:50 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:38.110 18:44:50 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:38.110 18:44:50 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:38.110 18:44:50 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:38.110 18:44:50 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:38.110 18:44:50 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:38.110 18:44:50 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:38.110 18:44:50 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:38.110 18:44:50 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:38.110 18:44:50 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:38.110 18:44:50 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:38.110 18:44:50 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b8484e2-e129-4a11-8748-0b3c728771da 00:37:38.110 18:44:50 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=0b8484e2-e129-4a11-8748-0b3c728771da 00:37:38.110 18:44:50 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:38.110 18:44:50 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:38.110 18:44:50 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:38.110 18:44:50 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:38.110 18:44:50 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:38.110 18:44:50 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:38.110 18:44:50 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:38.110 18:44:50 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:38.110 18:44:50 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.110 18:44:50 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.110 18:44:50 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.110 18:44:50 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:38.111 18:44:50 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.111 18:44:50 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:37:38.111 18:44:50 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:38.111 18:44:50 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:38.111 18:44:50 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:38.111 18:44:50 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:38.111 18:44:50 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:38.111 18:44:50 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:38.111 18:44:50 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:38.111 18:44:50 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:38.111 18:44:50 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:38.111 18:44:50 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:38.111 18:44:50 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:38.111 18:44:50 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:38.111 18:44:50 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:38.111 18:44:50 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:38.111 18:44:50 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:38.111 18:44:50 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:38.111 18:44:50 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:38.111 18:44:50 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:38.111 18:44:50 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:38.111 18:44:50 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:38.111 18:44:50 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:38.111 18:44:50 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:38.111 18:44:50 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:38.111 18:44:50 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:38.111 18:44:50 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:38.111 18:44:50 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:38.111 18:44:50 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:38.369 18:44:50 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:38.369 /tmp/:spdk-test:key0 00:37:38.370 18:44:50 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:38.370 18:44:50 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:38.370 18:44:50 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:38.370 18:44:50 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:38.370 18:44:50 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:38.370 18:44:50 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:38.370 18:44:50 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:38.370 18:44:50 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:38.370 18:44:50 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:38.370 18:44:50 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:38.370 18:44:50 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:38.370 18:44:50 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:38.370 18:44:50 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:38.370 18:44:50 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:38.370 18:44:50 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:38.370 /tmp/:spdk-test:key1 00:37:38.370 18:44:50 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:38.370 18:44:50 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=112519 00:37:38.370 18:44:50 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:38.370 18:44:50 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 112519 00:37:38.370 18:44:50 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 112519 ']' 00:37:38.370 18:44:50 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:38.370 18:44:50 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:38.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:38.370 18:44:50 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:38.370 18:44:50 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:38.370 18:44:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:38.370 [2024-07-22 18:44:50.324864] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
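prep_key above turns a raw hex key into the NVMe TLS PSK interchange format (the NVMeTLSkey-1:00:...: strings that appear in the next entries), writes it to /tmp/:spdk-test:key0 and restricts the file to mode 0600. The encoding itself (base64 payload plus checksum) is produced by the python snippet inside nvmf/common.sh and is not reproduced here; the sketch below only illustrates the file-handling half of the helper, with the interchange string treated as an opaque value copied from the trace.

# Illustrative stand-in for the file-handling part of prep_key.
prep_key_file() {
    local path=$1 psk=$2
    (umask 177 && printf '%s' "$psk" > "$path")   # file is created owner-only from the start
    chmod 0600 "$path"                            # matches the explicit chmod in the trace
    echo "$path"                                  # callers consume the echoed path, as the trace does
}

# Test-only key material, copied verbatim from the keyctl entries that follow.
prep_key_file /tmp/:spdk-test:key0 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'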
00:37:38.370 [2024-07-22 18:44:50.325030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112519 ] 00:37:38.629 [2024-07-22 18:44:50.498401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:38.887 [2024-07-22 18:44:50.824525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:39.822 18:44:51 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:39.822 18:44:51 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:37:39.822 18:44:51 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:39.822 18:44:51 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:39.822 18:44:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:39.822 [2024-07-22 18:44:51.773236] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:39.822 null0 00:37:39.822 [2024-07-22 18:44:51.805534] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:39.822 [2024-07-22 18:44:51.805948] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:39.822 18:44:51 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:39.822 18:44:51 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:39.822 193965844 00:37:39.822 18:44:51 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:39.822 320156136 00:37:39.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:39.822 18:44:51 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=112565 00:37:39.822 18:44:51 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:39.822 18:44:51 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 112565 /var/tmp/bperf.sock 00:37:39.822 18:44:51 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 112565 ']' 00:37:39.822 18:44:51 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:39.822 18:44:51 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:39.822 18:44:51 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:39.822 18:44:51 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:39.822 18:44:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:40.081 [2024-07-22 18:44:51.958637] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
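Where keyring_file kept its PSKs in plain files, this test parks them in the kernel session keyring and lets bdevperf resolve them by name via --psk :spdk-test:key0. The keyctl traffic in the trace reduces to the calls below; the serial numbers printed by keyctl add (193965844 and 320156136 in this run) are what the later search, print and unlink steps operate on.

# Load both interchange-format PSKs into the session keyring (@s) under the
# names the bdev layer will look up, keeping the serials keyctl returns.
sn0=$(keyctl add user :spdk-test:key0 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' @s)
sn1=$(keyctl add user :spdk-test:key1 'NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:' @s)

keyctl search @s user :spdk-test:key0    # name -> serial, should print the same value as $sn0
keyctl print "$sn0"                      # dumps the stored interchange string back

# Cleanup mirrors the end of the test: one unlink per key, '1 links removed' each.
keyctl unlink "$sn0"
keyctl unlink "$sn1"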
00:37:40.081 [2024-07-22 18:44:51.959255] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112565 ] 00:37:40.339 [2024-07-22 18:44:52.151013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:40.597 [2024-07-22 18:44:52.475143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:41.164 18:44:52 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:41.164 18:44:52 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:37:41.164 18:44:52 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:41.164 18:44:52 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:41.441 18:44:53 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:41.441 18:44:53 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:42.007 18:44:53 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:42.007 18:44:53 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:42.265 [2024-07-22 18:44:54.130135] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:42.265 nvme0n1 00:37:42.265 18:44:54 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:42.265 18:44:54 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:42.265 18:44:54 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:42.265 18:44:54 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:42.265 18:44:54 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:42.265 18:44:54 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:42.832 18:44:54 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:42.832 18:44:54 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:42.832 18:44:54 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:42.832 18:44:54 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:42.832 18:44:54 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:42.832 18:44:54 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:42.832 18:44:54 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:42.832 18:44:54 keyring_linux -- keyring/linux.sh@25 -- # sn=193965844 00:37:42.832 18:44:54 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:42.832 18:44:54 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:42.832 18:44:54 keyring_linux -- keyring/linux.sh@26 -- # [[ 193965844 == \1\9\3\9\6\5\8\4\4 ]] 00:37:42.832 18:44:54 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 193965844 00:37:42.832 18:44:54 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:42.832 18:44:54 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:43.091 Running I/O for 1 seconds... 00:37:44.025 00:37:44.025 Latency(us) 00:37:44.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:44.025 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:44.025 nvme0n1 : 1.01 7775.88 30.37 0.00 0.00 16312.79 12392.26 28597.53 00:37:44.025 =================================================================================================================== 00:37:44.025 Total : 7775.88 30.37 0.00 0.00 16312.79 12392.26 28597.53 00:37:44.025 0 00:37:44.025 18:44:55 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:44.025 18:44:55 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:44.284 18:44:56 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:44.284 18:44:56 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:44.284 18:44:56 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:44.284 18:44:56 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:44.284 18:44:56 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:44.284 18:44:56 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:44.543 18:44:56 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:44.543 18:44:56 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:44.543 18:44:56 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:44.543 18:44:56 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:44.543 18:44:56 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:37:44.543 18:44:56 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:44.543 18:44:56 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:44.543 18:44:56 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:44.543 18:44:56 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:44.543 18:44:56 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:44.543 18:44:56 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:44.543 18:44:56 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:37:45.110 [2024-07-22 18:44:56.855257] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:45.110 [2024-07-22 18:44:56.856034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002f880 (107): Transport endpoint is not connected 00:37:45.110 [2024-07-22 18:44:56.856988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002f880 (9): Bad file descriptor 00:37:45.110 [2024-07-22 18:44:56.857979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:45.110 [2024-07-22 18:44:56.858055] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:45.110 [2024-07-22 18:44:56.858076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:45.110 2024/07/22 18:44:56 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:37:45.110 request: 00:37:45.110 { 00:37:45.110 "method": "bdev_nvme_attach_controller", 00:37:45.110 "params": { 00:37:45.110 "name": "nvme0", 00:37:45.110 "trtype": "tcp", 00:37:45.110 "traddr": "127.0.0.1", 00:37:45.110 "adrfam": "ipv4", 00:37:45.110 "trsvcid": "4420", 00:37:45.110 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:45.110 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:45.110 "prchk_reftag": false, 00:37:45.110 "prchk_guard": false, 00:37:45.110 "hdgst": false, 00:37:45.110 "ddgst": false, 00:37:45.110 "psk": ":spdk-test:key1" 00:37:45.110 } 00:37:45.110 } 00:37:45.110 Got JSON-RPC error response 00:37:45.110 GoRPCClient: error on JSON-RPC call 00:37:45.110 18:44:56 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:37:45.110 18:44:56 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:45.110 18:44:56 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:45.110 18:44:56 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:45.110 18:44:56 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:45.110 18:44:56 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:45.110 18:44:56 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:45.110 18:44:56 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:45.110 18:44:56 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:45.110 18:44:56 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:45.110 18:44:56 keyring_linux -- keyring/linux.sh@33 -- # sn=193965844 00:37:45.110 18:44:56 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 193965844 00:37:45.110 1 links removed 00:37:45.110 18:44:56 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:45.110 18:44:56 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:45.110 18:44:56 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:45.110 18:44:56 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:45.110 18:44:56 keyring_linux -- keyring/linux.sh@16 -- # 
keyctl search @s user :spdk-test:key1 00:37:45.110 18:44:56 keyring_linux -- keyring/linux.sh@33 -- # sn=320156136 00:37:45.110 18:44:56 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 320156136 00:37:45.110 1 links removed 00:37:45.110 18:44:56 keyring_linux -- keyring/linux.sh@41 -- # killprocess 112565 00:37:45.110 18:44:56 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 112565 ']' 00:37:45.110 18:44:56 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 112565 00:37:45.110 18:44:56 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:37:45.110 18:44:56 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:45.110 18:44:56 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112565 00:37:45.110 18:44:56 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:45.110 18:44:56 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:45.110 killing process with pid 112565 00:37:45.110 18:44:56 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112565' 00:37:45.110 18:44:56 keyring_linux -- common/autotest_common.sh@967 -- # kill 112565 00:37:45.110 Received shutdown signal, test time was about 1.000000 seconds 00:37:45.110 00:37:45.110 Latency(us) 00:37:45.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:45.110 =================================================================================================================== 00:37:45.110 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:45.110 18:44:56 keyring_linux -- common/autotest_common.sh@972 -- # wait 112565 00:37:46.484 18:44:58 keyring_linux -- keyring/linux.sh@42 -- # killprocess 112519 00:37:46.484 18:44:58 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 112519 ']' 00:37:46.484 18:44:58 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 112519 00:37:46.484 18:44:58 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:37:46.484 18:44:58 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:46.484 18:44:58 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112519 00:37:46.484 18:44:58 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:46.484 18:44:58 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:46.484 killing process with pid 112519 00:37:46.484 18:44:58 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112519' 00:37:46.484 18:44:58 keyring_linux -- common/autotest_common.sh@967 -- # kill 112519 00:37:46.484 18:44:58 keyring_linux -- common/autotest_common.sh@972 -- # wait 112519 00:37:49.014 00:37:49.014 real 0m10.643s 00:37:49.014 user 0m18.223s 00:37:49.014 sys 0m2.106s 00:37:49.014 18:45:00 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:49.014 18:45:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:49.014 ************************************ 00:37:49.014 END TEST keyring_linux 00:37:49.014 ************************************ 00:37:49.014 18:45:00 -- common/autotest_common.sh@1142 -- # return 0 00:37:49.014 18:45:00 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:49.014 18:45:00 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:49.014 18:45:00 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:37:49.014 18:45:00 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:37:49.014 18:45:00 -- spdk/autotest.sh@330 -- # '[' 0 
-eq 1 ']' 00:37:49.014 18:45:00 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:49.014 18:45:00 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:49.014 18:45:00 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:49.014 18:45:00 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:37:49.014 18:45:00 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:37:49.014 18:45:00 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:49.014 18:45:00 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:49.014 18:45:00 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:49.014 18:45:00 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:49.014 18:45:00 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:37:49.014 18:45:00 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:37:49.014 18:45:00 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:37:49.014 18:45:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:49.014 18:45:00 -- common/autotest_common.sh@10 -- # set +x 00:37:49.014 18:45:00 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:37:49.014 18:45:00 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:37:49.014 18:45:00 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:37:49.014 18:45:00 -- common/autotest_common.sh@10 -- # set +x 00:37:50.389 INFO: APP EXITING 00:37:50.389 INFO: killing all VMs 00:37:50.389 INFO: killing vhost app 00:37:50.389 INFO: EXIT DONE 00:37:51.359 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:51.359 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:37:51.359 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:37:51.982 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:51.982 Cleaning 00:37:51.982 Removing: /var/run/dpdk/spdk0/config 00:37:51.982 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:51.982 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:51.982 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:51.982 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:51.982 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:51.982 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:51.982 Removing: /var/run/dpdk/spdk1/config 00:37:51.983 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:51.983 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:51.983 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:51.983 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:51.983 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:51.983 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:51.983 Removing: /var/run/dpdk/spdk2/config 00:37:51.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:51.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:51.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:51.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:51.983 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:51.983 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:51.983 Removing: /var/run/dpdk/spdk3/config 00:37:51.983 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:51.983 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:51.983 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:51.983 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:51.983 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:51.983 Removing: /var/run/dpdk/spdk3/hugepage_info 
00:37:51.983 Removing: /var/run/dpdk/spdk4/config 00:37:51.983 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:51.983 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:51.983 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:51.983 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:51.983 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:51.983 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:51.983 Removing: /dev/shm/nvmf_trace.0 00:37:51.983 Removing: /dev/shm/spdk_tgt_trace.pid61443 00:37:51.983 Removing: /var/run/dpdk/spdk0 00:37:51.983 Removing: /var/run/dpdk/spdk1 00:37:51.983 Removing: /var/run/dpdk/spdk2 00:37:51.983 Removing: /var/run/dpdk/spdk3 00:37:51.983 Removing: /var/run/dpdk/spdk4 00:37:51.983 Removing: /var/run/dpdk/spdk_pid100463 00:37:51.983 Removing: /var/run/dpdk/spdk_pid101835 00:37:51.983 Removing: /var/run/dpdk/spdk_pid102455 00:37:51.983 Removing: /var/run/dpdk/spdk_pid102458 00:37:51.983 Removing: /var/run/dpdk/spdk_pid104406 00:37:51.983 Removing: /var/run/dpdk/spdk_pid104519 00:37:51.983 Removing: /var/run/dpdk/spdk_pid104618 00:37:51.983 Removing: /var/run/dpdk/spdk_pid104728 00:37:51.983 Removing: /var/run/dpdk/spdk_pid104911 00:37:51.983 Removing: /var/run/dpdk/spdk_pid105007 00:37:51.983 Removing: /var/run/dpdk/spdk_pid105104 00:37:51.983 Removing: /var/run/dpdk/spdk_pid105207 00:37:51.983 Removing: /var/run/dpdk/spdk_pid105584 00:37:51.983 Removing: /var/run/dpdk/spdk_pid106294 00:37:52.241 Removing: /var/run/dpdk/spdk_pid107644 00:37:52.241 Removing: /var/run/dpdk/spdk_pid107853 00:37:52.241 Removing: /var/run/dpdk/spdk_pid108140 00:37:52.241 Removing: /var/run/dpdk/spdk_pid108452 00:37:52.241 Removing: /var/run/dpdk/spdk_pid109016 00:37:52.241 Removing: /var/run/dpdk/spdk_pid109022 00:37:52.241 Removing: /var/run/dpdk/spdk_pid109407 00:37:52.241 Removing: /var/run/dpdk/spdk_pid109567 00:37:52.241 Removing: /var/run/dpdk/spdk_pid109722 00:37:52.241 Removing: /var/run/dpdk/spdk_pid109825 00:37:52.241 Removing: /var/run/dpdk/spdk_pid109974 00:37:52.241 Removing: /var/run/dpdk/spdk_pid110092 00:37:52.241 Removing: /var/run/dpdk/spdk_pid110785 00:37:52.241 Removing: /var/run/dpdk/spdk_pid110822 00:37:52.241 Removing: /var/run/dpdk/spdk_pid110857 00:37:52.241 Removing: /var/run/dpdk/spdk_pid111319 00:37:52.241 Removing: /var/run/dpdk/spdk_pid111353 00:37:52.241 Removing: /var/run/dpdk/spdk_pid111390 00:37:52.241 Removing: /var/run/dpdk/spdk_pid111817 00:37:52.241 Removing: /var/run/dpdk/spdk_pid111852 00:37:52.241 Removing: /var/run/dpdk/spdk_pid112340 00:37:52.241 Removing: /var/run/dpdk/spdk_pid112519 00:37:52.241 Removing: /var/run/dpdk/spdk_pid112565 00:37:52.241 Removing: /var/run/dpdk/spdk_pid61216 00:37:52.241 Removing: /var/run/dpdk/spdk_pid61443 00:37:52.241 Removing: /var/run/dpdk/spdk_pid61727 00:37:52.241 Removing: /var/run/dpdk/spdk_pid61848 00:37:52.241 Removing: /var/run/dpdk/spdk_pid61916 00:37:52.241 Removing: /var/run/dpdk/spdk_pid62044 00:37:52.241 Removing: /var/run/dpdk/spdk_pid62080 00:37:52.241 Removing: /var/run/dpdk/spdk_pid62234 00:37:52.241 Removing: /var/run/dpdk/spdk_pid62532 00:37:52.241 Removing: /var/run/dpdk/spdk_pid62728 00:37:52.241 Removing: /var/run/dpdk/spdk_pid62848 00:37:52.241 Removing: /var/run/dpdk/spdk_pid62964 00:37:52.241 Removing: /var/run/dpdk/spdk_pid63082 00:37:52.241 Removing: /var/run/dpdk/spdk_pid63127 00:37:52.241 Removing: /var/run/dpdk/spdk_pid63169 00:37:52.241 Removing: /var/run/dpdk/spdk_pid63233 00:37:52.241 Removing: /var/run/dpdk/spdk_pid63361 
00:37:52.241 Removing: /var/run/dpdk/spdk_pid64015 00:37:52.241 Removing: /var/run/dpdk/spdk_pid64102 00:37:52.241 Removing: /var/run/dpdk/spdk_pid64194 00:37:52.241 Removing: /var/run/dpdk/spdk_pid64222 00:37:52.241 Removing: /var/run/dpdk/spdk_pid64375 00:37:52.241 Removing: /var/run/dpdk/spdk_pid64408 00:37:52.241 Removing: /var/run/dpdk/spdk_pid64566 00:37:52.241 Removing: /var/run/dpdk/spdk_pid64600 00:37:52.241 Removing: /var/run/dpdk/spdk_pid64681 00:37:52.241 Removing: /var/run/dpdk/spdk_pid64711 00:37:52.241 Removing: /var/run/dpdk/spdk_pid64781 00:37:52.241 Removing: /var/run/dpdk/spdk_pid64822 00:37:52.241 Removing: /var/run/dpdk/spdk_pid65026 00:37:52.241 Removing: /var/run/dpdk/spdk_pid65068 00:37:52.241 Removing: /var/run/dpdk/spdk_pid65149 00:37:52.241 Removing: /var/run/dpdk/spdk_pid65248 00:37:52.241 Removing: /var/run/dpdk/spdk_pid65284 00:37:52.241 Removing: /var/run/dpdk/spdk_pid65362 00:37:52.241 Removing: /var/run/dpdk/spdk_pid65409 00:37:52.241 Removing: /var/run/dpdk/spdk_pid65461 00:37:52.241 Removing: /var/run/dpdk/spdk_pid65502 00:37:52.241 Removing: /var/run/dpdk/spdk_pid65554 00:37:52.241 Removing: /var/run/dpdk/spdk_pid65595 00:37:52.241 Removing: /var/run/dpdk/spdk_pid65647 00:37:52.241 Removing: /var/run/dpdk/spdk_pid65694 00:37:52.241 Removing: /var/run/dpdk/spdk_pid65740 00:37:52.241 Removing: /var/run/dpdk/spdk_pid65787 00:37:52.241 Removing: /var/run/dpdk/spdk_pid65832 00:37:52.241 Removing: /var/run/dpdk/spdk_pid65881 00:37:52.241 Removing: /var/run/dpdk/spdk_pid65933 00:37:52.241 Removing: /var/run/dpdk/spdk_pid65974 00:37:52.241 Removing: /var/run/dpdk/spdk_pid66026 00:37:52.241 Removing: /var/run/dpdk/spdk_pid66077 00:37:52.241 Removing: /var/run/dpdk/spdk_pid66125 00:37:52.241 Removing: /var/run/dpdk/spdk_pid66174 00:37:52.241 Removing: /var/run/dpdk/spdk_pid66224 00:37:52.241 Removing: /var/run/dpdk/spdk_pid66276 00:37:52.241 Removing: /var/run/dpdk/spdk_pid66329 00:37:52.241 Removing: /var/run/dpdk/spdk_pid66411 00:37:52.241 Removing: /var/run/dpdk/spdk_pid66557 00:37:52.241 Removing: /var/run/dpdk/spdk_pid67022 00:37:52.241 Removing: /var/run/dpdk/spdk_pid67392 00:37:52.241 Removing: /var/run/dpdk/spdk_pid69970 00:37:52.241 Removing: /var/run/dpdk/spdk_pid70015 00:37:52.241 Removing: /var/run/dpdk/spdk_pid70343 00:37:52.513 Removing: /var/run/dpdk/spdk_pid70398 00:37:52.513 Removing: /var/run/dpdk/spdk_pid70799 00:37:52.513 Removing: /var/run/dpdk/spdk_pid71344 00:37:52.513 Removing: /var/run/dpdk/spdk_pid71804 00:37:52.513 Removing: /var/run/dpdk/spdk_pid72852 00:37:52.513 Removing: /var/run/dpdk/spdk_pid73873 00:37:52.513 Removing: /var/run/dpdk/spdk_pid74009 00:37:52.513 Removing: /var/run/dpdk/spdk_pid74099 00:37:52.513 Removing: /var/run/dpdk/spdk_pid75620 00:37:52.513 Removing: /var/run/dpdk/spdk_pid75955 00:37:52.513 Removing: /var/run/dpdk/spdk_pid82784 00:37:52.513 Removing: /var/run/dpdk/spdk_pid83176 00:37:52.513 Removing: /var/run/dpdk/spdk_pid83786 00:37:52.513 Removing: /var/run/dpdk/spdk_pid84218 00:37:52.514 Removing: /var/run/dpdk/spdk_pid84221 00:37:52.514 Removing: /var/run/dpdk/spdk_pid84280 00:37:52.514 Removing: /var/run/dpdk/spdk_pid84340 00:37:52.514 Removing: /var/run/dpdk/spdk_pid84401 00:37:52.514 Removing: /var/run/dpdk/spdk_pid84447 00:37:52.514 Removing: /var/run/dpdk/spdk_pid84450 00:37:52.514 Removing: /var/run/dpdk/spdk_pid84482 00:37:52.514 Removing: /var/run/dpdk/spdk_pid84527 00:37:52.514 Removing: /var/run/dpdk/spdk_pid84530 00:37:52.514 Removing: /var/run/dpdk/spdk_pid84589 00:37:52.514 Removing: 
/var/run/dpdk/spdk_pid84649 00:37:52.514 Removing: /var/run/dpdk/spdk_pid84710 00:37:52.514 Removing: /var/run/dpdk/spdk_pid84755 00:37:52.514 Removing: /var/run/dpdk/spdk_pid84758 00:37:52.514 Removing: /var/run/dpdk/spdk_pid84790 00:37:52.514 Removing: /var/run/dpdk/spdk_pid85116 00:37:52.514 Removing: /var/run/dpdk/spdk_pid85283 00:37:52.514 Removing: /var/run/dpdk/spdk_pid85523 00:37:52.514 Removing: /var/run/dpdk/spdk_pid90863 00:37:52.514 Removing: /var/run/dpdk/spdk_pid91340 00:37:52.514 Removing: /var/run/dpdk/spdk_pid91444 00:37:52.514 Removing: /var/run/dpdk/spdk_pid91603 00:37:52.514 Removing: /var/run/dpdk/spdk_pid91661 00:37:52.514 Removing: /var/run/dpdk/spdk_pid91713 00:37:52.514 Removing: /var/run/dpdk/spdk_pid91771 00:37:52.514 Removing: /var/run/dpdk/spdk_pid91958 00:37:52.514 Removing: /var/run/dpdk/spdk_pid92113 00:37:52.514 Removing: /var/run/dpdk/spdk_pid92414 00:37:52.514 Removing: /var/run/dpdk/spdk_pid92561 00:37:52.514 Removing: /var/run/dpdk/spdk_pid92834 00:37:52.514 Removing: /var/run/dpdk/spdk_pid92978 00:37:52.514 Removing: /var/run/dpdk/spdk_pid93137 00:37:52.514 Removing: /var/run/dpdk/spdk_pid93507 00:37:52.514 Removing: /var/run/dpdk/spdk_pid93914 00:37:52.514 Removing: /var/run/dpdk/spdk_pid93928 00:37:52.514 Removing: /var/run/dpdk/spdk_pid96245 00:37:52.514 Removing: /var/run/dpdk/spdk_pid96570 00:37:52.514 Removing: /var/run/dpdk/spdk_pid97088 00:37:52.514 Removing: /var/run/dpdk/spdk_pid97097 00:37:52.514 Removing: /var/run/dpdk/spdk_pid97454 00:37:52.514 Removing: /var/run/dpdk/spdk_pid97475 00:37:52.514 Removing: /var/run/dpdk/spdk_pid97490 00:37:52.514 Removing: /var/run/dpdk/spdk_pid97523 00:37:52.514 Removing: /var/run/dpdk/spdk_pid97533 00:37:52.514 Removing: /var/run/dpdk/spdk_pid97681 00:37:52.514 Removing: /var/run/dpdk/spdk_pid97690 00:37:52.514 Removing: /var/run/dpdk/spdk_pid97794 00:37:52.514 Removing: /var/run/dpdk/spdk_pid97797 00:37:52.514 Removing: /var/run/dpdk/spdk_pid97907 00:37:52.514 Removing: /var/run/dpdk/spdk_pid97910 00:37:52.514 Removing: /var/run/dpdk/spdk_pid98380 00:37:52.514 Removing: /var/run/dpdk/spdk_pid98423 00:37:52.514 Removing: /var/run/dpdk/spdk_pid98568 00:37:52.514 Removing: /var/run/dpdk/spdk_pid98681 00:37:52.514 Removing: /var/run/dpdk/spdk_pid99096 00:37:52.514 Removing: /var/run/dpdk/spdk_pid99352 00:37:52.514 Removing: /var/run/dpdk/spdk_pid99870 00:37:52.514 Clean 00:37:52.772 18:45:04 -- common/autotest_common.sh@1451 -- # return 0 00:37:52.772 18:45:04 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:37:52.772 18:45:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:52.772 18:45:04 -- common/autotest_common.sh@10 -- # set +x 00:37:52.772 18:45:04 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:37:52.772 18:45:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:52.772 18:45:04 -- common/autotest_common.sh@10 -- # set +x 00:37:52.772 18:45:04 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:37:52.772 18:45:04 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:37:52.772 18:45:04 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:37:52.772 18:45:04 -- spdk/autotest.sh@391 -- # hash lcov 00:37:52.772 18:45:04 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:52.772 18:45:04 -- spdk/autotest.sh@393 -- # hostname 00:37:52.772 18:45:04 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:37:53.028 geninfo: WARNING: invalid characters removed from testname! 00:38:25.093 18:45:34 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:26.467 18:45:38 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:29.745 18:45:41 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:33.021 18:45:44 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:35.561 18:45:47 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:38.848 18:45:50 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:41.377 18:45:53 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:41.635 18:45:53 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:41.635 18:45:53 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:41.635 18:45:53 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:41.635 18:45:53 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:41.635 18:45:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.635 18:45:53 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.635 18:45:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.635 18:45:53 -- paths/export.sh@5 -- $ export PATH 00:38:41.635 18:45:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.635 18:45:53 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:38:41.635 18:45:53 -- common/autobuild_common.sh@447 -- $ date +%s 00:38:41.635 18:45:53 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721673953.XXXXXX 00:38:41.635 18:45:53 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721673953.xNpcdJ 00:38:41.635 18:45:53 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:38:41.635 18:45:53 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:38:41.635 18:45:53 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:38:41.635 18:45:53 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:38:41.635 18:45:53 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:38:41.635 18:45:53 -- common/autobuild_common.sh@463 -- $ get_config_params 00:38:41.635 18:45:53 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:38:41.635 18:45:53 -- common/autotest_common.sh@10 -- $ set +x 00:38:41.635 18:45:53 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:38:41.635 18:45:53 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:38:41.635 18:45:53 -- pm/common@17 -- $ local monitor 00:38:41.635 18:45:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:41.635 18:45:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:41.635 18:45:53 -- pm/common@25 -- $ sleep 1 00:38:41.635 18:45:53 -- pm/common@21 -- $ date +%s 00:38:41.635 18:45:53 -- pm/common@21 -- $ date +%s 00:38:41.635 18:45:53 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721673953 00:38:41.635 
18:45:53 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721673953 00:38:41.635 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721673953_collect-vmstat.pm.log 00:38:41.635 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721673953_collect-cpu-load.pm.log 00:38:42.570 18:45:54 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:38:42.570 18:45:54 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:38:42.570 18:45:54 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:38:42.570 18:45:54 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:38:42.570 18:45:54 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:38:42.570 18:45:54 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:38:42.570 18:45:54 -- spdk/autopackage.sh@19 -- $ timing_finish 00:38:42.570 18:45:54 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:42.570 18:45:54 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:38:42.570 18:45:54 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:38:42.570 18:45:54 -- spdk/autopackage.sh@20 -- $ exit 0 00:38:42.570 18:45:54 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:38:42.570 18:45:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:38:42.570 18:45:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:38:42.570 18:45:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:42.570 18:45:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:38:42.570 18:45:54 -- pm/common@44 -- $ pid=114351 00:38:42.570 18:45:54 -- pm/common@50 -- $ kill -TERM 114351 00:38:42.570 18:45:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:42.570 18:45:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:38:42.570 18:45:54 -- pm/common@44 -- $ pid=114353 00:38:42.570 18:45:54 -- pm/common@50 -- $ kill -TERM 114353 00:38:42.570 + [[ -n 5166 ]] 00:38:42.570 + sudo kill 5166 00:38:43.954 [Pipeline] } 00:38:43.976 [Pipeline] // timeout 00:38:43.984 [Pipeline] } 00:38:44.006 [Pipeline] // stage 00:38:44.012 [Pipeline] } 00:38:44.032 [Pipeline] // catchError 00:38:44.043 [Pipeline] stage 00:38:44.045 [Pipeline] { (Stop VM) 00:38:44.060 [Pipeline] sh 00:38:44.342 + vagrant halt 00:38:48.530 ==> default: Halting domain... 00:38:55.126 [Pipeline] sh 00:38:55.397 + vagrant destroy -f 00:38:59.579 ==> default: Removing domain... 
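The autopackage epilogue traced above renders a build-timing flamegraph from timing.txt and then stops the CPU-load and vmstat monitors through their pid files under the power/ output directory. A minimal sketch of that teardown, using the flamegraph.pl flags and pid-file paths shown in the trace; the timing.svg output name and the shortened output path are assumptions, since the redirect target is not visible in this log:

    #!/usr/bin/env bash
    # Sketch of the timing-flamegraph and monitor-teardown steps traced above.
    out=/home/vagrant/spdk_repo/output        # the log uses $rootdir/../output

    # Build-timing flamegraph with the flags shown in the trace; timing.svg is
    # an assumed output path, not taken from the log.
    if [[ -x /usr/local/FlameGraph/flamegraph.pl ]]; then
        /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: \
            --countname seconds "$out/timing.txt" > "$out/timing.svg"
    fi

    # Stop collect-cpu-load and collect-vmstat by pid file, mirroring pm/common.
    for monitor in collect-cpu-load collect-vmstat; do
        pidfile="$out/power/$monitor.pid"
        [[ -e $pidfile ]] || continue
        kill -TERM "$(<"$pidfile")" 2>/dev/null || true   # monitor may have exited already
    done

In this run the two monitors were pids 114351 and 114353, and both received TERM before the VM was halted and destroyed.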
00:38:59.593 [Pipeline] sh 00:38:59.870 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:38:59.878 [Pipeline] } 00:38:59.898 [Pipeline] // stage 00:38:59.904 [Pipeline] } 00:38:59.923 [Pipeline] // dir 00:38:59.930 [Pipeline] } 00:38:59.947 [Pipeline] // wrap 00:38:59.954 [Pipeline] } 00:38:59.971 [Pipeline] // catchError 00:38:59.982 [Pipeline] stage 00:38:59.984 [Pipeline] { (Epilogue) 00:39:00.007 [Pipeline] sh 00:39:00.287 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:08.406 [Pipeline] catchError 00:39:08.408 [Pipeline] { 00:39:08.425 [Pipeline] sh 00:39:08.702 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:08.964 Artifacts sizes are good 00:39:08.970 [Pipeline] } 00:39:08.986 [Pipeline] // catchError 00:39:08.997 [Pipeline] archiveArtifacts 00:39:09.004 Archiving artifacts 00:39:09.153 [Pipeline] cleanWs 00:39:09.164 [WS-CLEANUP] Deleting project workspace... 00:39:09.164 [WS-CLEANUP] Deferred wipeout is used... 00:39:09.170 [WS-CLEANUP] done 00:39:09.172 [Pipeline] } 00:39:09.190 [Pipeline] // stage 00:39:09.196 [Pipeline] } 00:39:09.212 [Pipeline] // node 00:39:09.217 [Pipeline] End of Pipeline 00:39:09.240 Finished: SUCCESS
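For reference, the coverage post-processing traced earlier in this run (spdk/autotest.sh@394 through @400) merges the base and test lcov captures and then strips external and example code from the combined tracefile before it is deleted. A standalone sketch of that sequence, with the rc options abridged and the output directory spelled out for readability:

    #!/usr/bin/env bash
    # Merge base+test coverage and drop out-of-tree code, as the autotest trace shows.
    out=/home/vagrant/spdk_repo/output        # $rootdir/../output in the log
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

    # Combine the pre-test and post-test captures into one tracefile.
    lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

    # Filter out DPDK, system headers, and example/app code, rewriting the tracefile in place
    # (the traced commands write to the same path they read, so the sketch does too).
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
    done

    # Drop the intermediate captures once the merged tracefile is in place.
    rm -f "$out/cov_base.info" "$out/cov_test.info"

genhtml is not shown in this tail of the log, but it would normally be pointed at cov_total.info to produce the HTML coverage report.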